# quotacheck -avgum
quotacheck: Cannot find filesystem to check or filesystem not mounted with quota option.
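A side note on this error: quotacheck(8) only applies to ext2/3/4-style quota files; XFS keeps its quota metadata inside the filesystem and is managed with xfs_quota(8), so quotacheck finds nothing to check unless an ext filesystem is mounted with quota options. A minimal, hypothetical helper to pick the right tool by filesystem type:

```shell
# Hypothetical helper (not a cPanel tool): choose the quota utility for a
# given filesystem type. quotacheck(8) scans ext-style external quota
# files; XFS quota accounting lives inside the filesystem and is driven
# by xfs_quota(8) plus mount-time flags.
quota_tool_for() {
  case "$1" in
    xfs)            echo "xfs_quota" ;;
    ext2|ext3|ext4) echo "quotacheck" ;;
    *)              echo "unsupported" ;;
  esac
}

quota_tool_for xfs    # -> xfs_quota
quota_tool_for ext4   # -> quotacheck
```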
# /var/log/dmesg
[ 0.655995] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 0.656974] zpool: loaded
[ 0.656978] zbud: loaded
[ 0.657180] VFS: Disk quotas dquot_6.5.2
[ 0.657203] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 0.657366] Key type big_key registered
[ 0.657369] SELinux: Registering netfilter hooks
[ 0.658406] NET: Registered protocol family 38
# /var/log/quota_enable.log
journaled quota support: kernel supports, user space tools supports (available)
UUID=6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02 (enabling quotas)
The system will configure quotas on the “UUID=6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02” which is using the “xfs” filesystem.
A reboot will be required to enable quotas on xfs.
Updating Quota Files..........Done
Quotas have been enabled and updated.
Modifying the /etc/default/grub file to enable user quotas...
Running the "grub2-mkconfig" command to regenerate the system's boot configuration...
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-1127.19.1.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-1127.19.1.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-1127.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-1127.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-cab9605edaa5484da7c2f02b8fd10762
Found initrd image: /boot/initramfs-0-rescue-cab9605edaa5484da7c2f02b8fd10762.img
done
The '/' partition uses the XFS® filesystem. You must reboot the server to enable quotas.
# /var/log/audit/audit.log
type=SERVICE_START msg=audit(1615572032.527:97): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=cpanelquotaonboot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
# /var/log/grubby
DBG: linuxefi /vmlinuz-3.10.0-1127.19.1.el7.x86_64 root=UUID=0ef2a656-b53f-49a2-8478-9a301b3b0617 ro rd.auto crashkernel=auto vga=normal nomodeset rootflags=uquota
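The grubby line above shows the command line grub is *supposed* to pass, but what matters is whether the booted kernel actually received it, which /proc/cmdline reveals. A small sketch, with a hypothetical helper name, that tests a cmdline string:

```shell
# Sketch (helper name is hypothetical): for an XFS root, quota flags must
# reach the *booted* kernel command line via rootflags=; editing grub
# files has no effect until the new cmdline is actually used at boot.
has_uquota() {
  # $1: a kernel command line, e.g. the contents of /proc/cmdline
  printf '%s\n' "$1" | grep -qw 'rootflags=uquota'
}

# On a live server: has_uquota "$(cat /proc/cmdline)" && echo "in effect"
has_uquota "ro rd.auto crashkernel=auto rootflags=uquota" && echo "in effect"
```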
# /var/log/cloud-init.log
2021-03-12 18:00:07,390 - util.py[DEBUG]: Fetched {'configfs': {'mountpoint': '/sys/kernel/config', 'opts': 'rw,relatime', 'fstype': 'configfs'}, 'efivarfs': {'mountpoint': '/sys/firmware/efi/efivars', 'opts': 'rw,nosuid,nodev,noexec,relatime', 'fstype': 'efivarfs'}, '/dev/loop0': {'mountpoint': '/var/tmp', 'opts': 'rw,nosuid,noexec,relatime,discard,data=ordered', 'fstype': 'ext4'}, 'devpts': {'mountpoint': '/dev/pts', 'opts': 'rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000', 'fstype': 'devpts'}, 'debugfs': {'mountpoint': '/sys/kernel/debug', 'opts': 'rw,relatime', 'fstype': 'debugfs'}, 'securityfs': {'mountpoint': '/sys/kernel/security', 'opts': 'rw,nosuid,nodev,noexec,relatime', 'fstype': 'securityfs'}, 'sysfs': {'mountpoint': '/sys', 'opts': 'rw,nosuid,nodev,noexec,relatime', 'fstype': 'sysfs'}, 'mqueue': {'mountpoint': '/dev/mqueue', 'opts': 'rw,relatime', 'fstype': 'mqueue'}, 'pstore': {'mountpoint': '/sys/fs/pstore', 'opts': 'rw,nosuid,nodev,noexec,relatime', 'fstype': 'pstore'}, '/dev/sda1': {'mountpoint': '/boot/efi', 'opts': 'rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro', 'fstype': 'vfat'}, 'hugetlbfs': {'mountpoint': '/dev/hugepages', 'opts': 'rw,relatime', 'fstype': 'hugetlbfs'}, 'systemd-1': {'mountpoint': '/proc/sys/fs/binfmt_misc', 'opts': 'rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=13075', 'fstype': 'autofs'}, 'cgroup': {'mountpoint': '/sys/fs/cgroup/pids', 'opts': 'rw,nosuid,nodev,noexec,relatime,pids', 'fstype': 'cgroup'}, 'sunrpc': {'mountpoint': '/var/lib/nfs/rpc_pipefs', 'opts': 'rw,relatime', 'fstype': 'rpc_pipefs'}, '/dev/md2': {'mountpoint': '/boot', 'opts': 'rw,relatime,attr2,inode64,noquota', 'fstype': 'xfs'}, 'tmpfs': {'mountpoint': '/sys/fs/cgroup', 'opts': 'ro,nosuid,nodev,noexec,mode=755', 'fstype': 'tmpfs'}, 'proc': {'mountpoint': '/proc', 'opts': 'rw,nosuid,nodev,noexec,relatime', 'fstype': 'proc'}, 'devtmpfs': {'mountpoint': '/dev', 'opts': 
'rw,nosuid,size=16217048k,nr_inodes=4054262,mode=755', 'fstype': 'devtmpfs'}, '/dev/md3': {'mountpoint': '/', 'opts': 'rw,relatime,attr2,inode64,noquota', 'fstype': 'xfs'}, 'rootfs': {'mountpoint': '/', 'opts': 'rw', 'fstype': 'rootfs'}} mounts from proc
# /var/log/messages
Mar 12 18:59:54 host kernel: VFS: Disk quotas dquot_6.5.2
Mar 12 19:00:16 host systemd: Starting cPanel fix quotas on boot...
Mar 12 19:00:19 host dracut: Executing: /usr/sbin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict -o "plymouth dash resume ifcfg" --mount "/dev/disk/by-uuid/0ef2a656-b53f-49a2-8478-9a301b3b0617 /sysroot xfs defaults,uquota" --no-hostonly-default-device -f /boot/initramfs-3.10.0-1127.19.1.el7.x86_64kdump.img 3.10.0-1127.19.1.el7.x86_64
/var/log/messages:Mar 12 19:00:32 host fixquotas-onboot: You must reboot the server to enable XFS® filesystem quotas.
/var/log/messages:Mar 12 19:00:32 host systemd: Started cPanel fix quotas on boot.
...
2021-03-12 18:00:15,683 - cc_growpart.py[DEBUG]: '/' SKIPPED: device_part_info(/dev/md3) failed: /dev/md3 not a partition
2021-03-12 18:00:15,698 - main.py[DEBUG]: Ran 12 modules with 0 failures
2021-03-12 18:00:17,111 - main.py[DEBUG]: Ran 13 modules with 0 failures
2021-03-12 18:00:19,061 - main.py[DEBUG]: Ran 10 modules with 0 failures
# /usr/local/cpanel/scripts/fixquotas
journaled quota support: kernel supports, user space tools supports (available)
UUID=0ef2a656-b53f-49a2-8478-9a301b3b0617 (already configured quotas = 1).
UUID=6fa0c2e6-8018-4134-bc19-04ede1f33c05 (already configured quotas = 0).
Updating Quota Files..........Done
Quotas have been enabled and updated.
You must reboot the server to enable XFS® filesystem quotas.
# cat /etc/fstab
UUID=0ef2a656-b53f-49a2-8478-9a301b3b0617 / xfs defaults,uquota 0 1
UUID=6fa0c2e6-8018-4134-bc19-04ede1f33c05 /boot xfs defaults 0 0
LABEL=EFI_SYSPART /boot/efi vfat defaults 0 1
UUID=eed9d5c5-223b-45ac-a8fb-748168e79183 swap swap defaults 0 0
UUID=c3b07c38-c633-4e79-850a-5ebd2608c166 swap swap defaults 0 0
/usr/tmpDSK /tmp ext3 defaults,noauto 0 0
# cat /var/log/quota_enable.log
journaled quota support: kernel supports, user space tools supports (available)
UUID=6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02 (enabling quotas)
The system will configure quotas on the “UUID=6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02” which is using the “xfs” filesystem.
A reboot will be required to enable quotas on xfs.
Updating Quota Files..........Done
Quotas have been enabled and updated.
Modifying the /etc/default/grub file to enable user quotas...
Running the "grub2-mkconfig" command to regenerate the system's boot configuration...
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-1127.19.1.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-1127.19.1.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-1127.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-1127.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-cab9605edaa5484da7c2f02b8fd10762
Found initrd image: /boot/initramfs-0-rescue-cab9605edaa5484da7c2f02b8fd10762.img
done
The '/' partition uses the XFS® filesystem. You must reboot the server to enable quotas.
# cat /etc/sysconfig/grub
GRUB_TIMEOUT=1
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial"
GRUB_DISABLE_RECOVERY="true"
GRUB_DISABLE_LINUX_UUID="false"
GRUB_CMDLINE_LINUX="rd.auto crashkernel=auto vga=normal nomodeset rootflags=uquota"
GRUB_TERMINAL_OUTPUT=""
GRUB_ENABLE_BLSCFG=false
# ls -l /*.user
ls: cannot access /*.user: No such file or directory
# mount | grep noquota
/dev/md3 on / type xfs (rw,relatime,attr2,inode64,noquota)
/dev/md2 on /boot type xfs (rw,relatime,attr2,inode64,noquota)
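This mount output is the decisive symptom: despite `uquota` in /etc/fstab, the kernel mounted `/` with `noquota`, and XFS quota accounting can only be switched on at mount time (for the root filesystem, that means `rootflags=` on the kernel command line). A hypothetical helper to classify an option string such as `findmnt -no OPTIONS /` returns:

```shell
# Hypothetical helper: classify XFS user-quota state from a mount-option
# string. "noquota" means accounting is off regardless of /etc/fstab,
# because XFS applies quota flags only at mount time.
xfs_uquota_state() {
  case ",$1," in
    *,noquota,*)             echo "off" ;;
    *,usrquota,*|*,uquota,*) echo "on" ;;
    *)                       echo "unknown" ;;
  esac
}

# On a live server: xfs_uquota_state "$(findmnt -no OPTIONS /)"
xfs_uquota_state "rw,relatime,attr2,inode64,noquota"   # -> off
```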
No quotas detected with the cPanel script:
# /usr/local/cpanel/scripts/resetquotas
Resetting quota for userXXX to 10240 M
No filesystems with quota detected.
Resetting quota for computes to 1000 M
No filesystems with quota detected.
Yes, I tried this several times with no success. After rebooting, the yellow error appears and no quotas are available.

Sorry to hear you're still having issues. Since the machine is using XFS, did you try the steps outlined here?
How to enable quotas on servers using the XFS filesystem
Introduction: To enable quotas on a server using XFS, you can use the WHM Initial Quota Setup feature. The feature will run the 'initquotas' script. There is a requirement of an extra step ... (support.cpanel.net)
If so, I'd get back with OVH about the issue.
# grep xfs /etc/fstab
UUID=0ef2a656-b53f-49a2-8478-9a301b3b0617 / xfs defaults,uquota 0 1
UUID=6fa0c2e6-8018-4134-bc19-04ede1f33c05 /boot xfs defaults 0 0
# mount | grep xfs | grep -v virtfs
/dev/md3 on / type xfs (rw,relatime,attr2,inode64,noquota)
/dev/md2 on /boot type xfs (rw,relatime,attr2,inode64,noquota)
That's normal - it's just the name of the partition you're seeing, which will vary on each machine.

Are the "/dev/mapper/centos_whm1-..." entries something that should be present?
# cat /var/log/dmesg
[ 0.000557] Security Framework initialized
[ 0.000563] SELinux: Initializing.
[ 0.000570] SELinux: Starting in permissive mode
[ 0.000570] Yama: becoming mindful.
.....
[ 2.717217] random: systemd: uninitialized urandom read (16 bytes read)
[ 2.728484] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[ 2.750214] systemd[1]: Detected architecture x86-64.
[ 2.761098] systemd[1]: Running in initial RAM disk.
.....
[ 4.245534] sd 6:0:0:0: [sda] Attached SCSI disk
[ 4.582737] md/raid1:md3: active with 2 out of 2 mirrors
[ 4.590622] md/raid1:md2: active with 2 out of 2 mirrors
[ 4.598120] md3: detected capacity change from 0 to 3999041126400
[ 4.598917] md2: detected capacity change from 0 to 536281088
[ 4.629658] random: fast init done
[ 4.711154] random: crng init done
[ 4.987533] SGI XFS with ACLs, security attributes, no debug enabled
[ 4.996805] XFS (md3): Mounting V4 Filesystem
[ 5.169382] XFS (md3): Ending clean mount
[ 5.715538] systemd-journald[134]: Received SIGTERM from PID 1 (systemd).
[ 6.196739] SELinux: Disabled at runtime.
[ 6.203327] SELinux: Unregistering netfilter hooks
# dmesg | grep -ir selinux
.cpanel/datastore/_usr_bin_gtar_--help: --no-selinux Disable the SELinux context support
.cpanel/datastore/_usr_bin_gtar_--help: --selinux Enable the SELinux context support
tmp/yum_save_tx.2020-11-09.03-12.UDbNmt.yumtx:mbr: libselinux-devel,x86_64,0,2.5,15.el7 70
tmp/yum_save_tx.2020-11-09.03-12.UDbNmt.yumtx: relatedto: libselinux-devel,x86_64,0,2.5,15.el7@a:dependson
tmp/yum_save_tx.2020-11-09.03-12.UDbNmt.yumtx: depends_on: libselinux-devel,x86_64,0,2.5,15.el7@a
anaconda-ks.cfg:# SELinux configuration
anaconda-ks.cfg:selinux --enforcing
anaconda-ks.cfg:echo "Fixing SELinux contexts."
original-ks.cfg:selinux --enforcing
original-ks.cfg:echo "Fixing SELinux contexts."
Hey man, I am also facing the same issue with quotas on an OVHcloud dedicated server. I found that file and changed the UUID to my boot drive's, but it didn't work. Can you elaborate on your solution, please?

I realise that this is an old thread - BUT this is the thread I kept landing on, so I imagine others will too.
So the issue is:
- Have a dedicated server from OVH (might also be the case with servers from KimSufi or So You Start)
- You installed CentOS or AlmaLinux or Cloudlinux via the OVH interface
- Now you can't get disk quota to load on Cloudlinux / cPanel
- You've followed the cPanel guides listed here (for both EFI and UEFI systems)
- You've engaged Cloudlinux, cPanel, OVH and your System Administrators - none of which can solve the issue bar "reformat and start again" (not an option for me)
The fix
- For whatever reason, OVH provisioned servers try to bootstrap the system from a networked drive (I imagine a common bootloader for their servers)
- So all the changes you are making to your grub files simply aren't used!
- It comes down to a file called /.ovhrc, which in my case had a "BUILD_UUID" param set to a drive I didn't recognise.
- I edited the file to use my /boot drive's UUID
- Regenerated my grub files with "grub2-mkconfig -o /boot/efi/EFI/almalinux/grub.cfg" (running almalinux / CL 8)
- Had previously already edited the /etc/fstab and /etc/default/grub adding quota flags (otherwise you have to do that. Follow previously linked guides on this thread)
- Reboot and enjoy quotas!!!!!!
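The steps above can be sketched as a script. Everything here mirrors the poster's report: `/.ovhrc` and its `BUILD_UUID` variable are what he found on his OVH machine; the helper takes the file path and UUID as parameters so it can be dry-run against a copy before touching the real file.

```shell
# Sketch of the /.ovhrc fix described above (OVH-provisioned AlmaLinux/
# CloudLinux 8). BUILD_UUID is the variable the poster found in /.ovhrc;
# on a live machine the file is /.ovhrc and the UUID comes from /boot.
set_build_uuid() {
  ovhrc="$1"; uuid="$2"
  sed -i "s/^BUILD_UUID=.*/BUILD_UUID=${uuid}/" "$ovhrc"
}

# On the server (as root), roughly:
#   set_build_uuid /.ovhrc "$(findmnt -no UUID /boot)"
#   grub2-mkconfig -o /boot/efi/EFI/almalinux/grub.cfg
#   reboot
# (fstab and /etc/default/grub must already carry the quota flags.)
```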
Same here, but following your instructions did not fix my quota issue.
My workaround was to move /home onto its own XFS partition mounted with quotas:
mount /dev/md127 /mnt
rsync -av /home/* /mnt/
umount /mnt ; mv /home /home-old ; mount /dev/md127 /home
Find the new partition's UUID (ls /dev/disk/by-uuid) and set /etc/fstab: UUID=xxxx /home xfs defaults,uquota 0 1
Then reboot. Afterwards, mount shows:
/dev/md3 on / type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/md2 on /boot type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/md127 on /home type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,usrquota)
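A quick sanity check for that fstab entry before rebooting (hypothetical helper; `UUID=xxxx` stays a placeholder exactly as in the post - substitute your real UUID):

```shell
# Hypothetical check: verify an fstab line mounts /home as xfs with a
# user-quota option (uquota or usrquota), matching the entry above.
fstab_home_has_uquota() {
  printf '%s\n' "$1" | \
    awk '$2=="/home" && $3=="xfs" && $4 ~ /(^|,)(u|usr)quota(,|$)/ {ok=1} END {exit !ok}'
}

fstab_home_has_uquota "UUID=xxxx /home xfs defaults,uquota 0 1" && echo "ok"
```

On the live system, quota reporting itself would then come from `xfs_quota -x -c report /home` after the reboot.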