New cPanel install: quotas impossible to fix

Operating System & Version
CentOS Linux release 7.9.2009 (Core)
cPanel & WHM Version
CENTOS 7.9 - v92.0.11

Mise

Well-Known Member
May 15, 2011
I have opened a support ticket. Please delete this post.
 

cPRex

Jurassic Moderator
Staff member
Oct 19, 2014
cPanel Access Level
Root Administrator
Hey there! Rather than removing the post, you always have the option of keeping the original errors you're seeing posted, and also posting the ticket number (if it was opened with cPanel) so we can track this on our end. That way we can post the resolution, and it may help someone else out in the future.
 

Mise

Well-Known Member
May 15, 2011
The ticket ID was 94304948. I'm sorry to say it was not very effective at the time; I was under pressure on my side to build a new server quickly because of the OVH fire, and I abandoned the ticket. It is now closed.

The new server is working, although quotas are still broken.

These are the logs and commands with relevant messages:

quotas not working:
Code:
# quotacheck -avgum
quotacheck: Cannot find filesystem to check or filesystem not mounted with quota option.
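(If I understand correctly, this message is expected on XFS: quotacheck only handles the ext-style quota files, while XFS keeps its quota accounting in the kernel. Assuming xfsprogs is installed, the XFS quota state itself can be read with:)
Code:
# xfs_quota -x -c state /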
dmesg shows Disk quotas dquot_6.5.2 is present:
Code:
# /var/log/dmesg

[    0.655995] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[    0.656974] zpool: loaded
[    0.656978] zbud: loaded
[    0.657180] VFS: Disk quotas dquot_6.5.2
[    0.657203] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.657366] Key type big_key registered
[    0.657369] SELinux:  Registering netfilter hooks
[    0.658406] NET: Registered protocol family 38
quotas are configured:
Code:
# /var/log/quota_enable.log

journaled quota support: kernel supports, user space tools supports (available)
UUID=6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02 (enabling quotas)
The system will configure quotas on the “UUID=6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02” which is using the “xfs” filesystem.
A reboot will be required to enable quotas on xfs.
Updating Quota Files..........Done
Quotas have been enabled and updated.
Modifying the /etc/default/grub file to enable user quotas...
Running the "grub2-mkconfig" command to regenerate the system's boot configuration...
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-1127.19.1.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-1127.19.1.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-1127.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-1127.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-cab9605edaa5484da7c2f02b8fd10762
Found initrd image: /boot/initramfs-0-rescue-cab9605edaa5484da7c2f02b8fd10762.img
done

The '/' partition uses the XFS® filesystem. You must reboot the server to enable quotas.
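After the reboot it is presumably worth checking whether the kernel actually received the flag grub was given - if rootflags=uquota does not appear here, the grub change never reached the boot loader that was really used:
Code:
# cat /proc/cmdline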
audit and grub:
Code:
# /var/log/audit/audit.log
type=SERVICE_START msg=audit(1615572032.527:97): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=cpanelquotaonboot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'


# /var/log/grubby
DBG:    linuxefi /vmlinuz-3.10.0-1127.19.1.el7.x86_64 root=UUID=0ef2a656-b53f-49a2-8478-9a301b3b0617 ro rd.auto  crashkernel=auto vga=normal nomodeset rootflags=uquota
cloud-init.log
Code:
# /var/log/cloud-init.log

2021-03-12 18:00:07,390 - util.py[DEBUG]: Fetched {'configfs': {'mountpoint': '/sys/kernel/config', 'opts': 'rw,relatime', 'fstype': 'configfs'}, 'efivarfs': {'mountpoint': '/sys/firmware/efi/efivars', 'opts': 'rw,nosuid,nodev,noexec,relatime', 'fstype': 'efivarfs'}, '/dev/loop0': {'mountpoint': '/var/tmp', 'opts': 'rw,nosuid,noexec,relatime,discard,data=ordered', 'fstype': 'ext4'}, 'devpts': {'mountpoint': '/dev/pts', 'opts': 'rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000', 'fstype': 'devpts'}, 'debugfs': {'mountpoint': '/sys/kernel/debug', 'opts': 'rw,relatime', 'fstype': 'debugfs'}, 'securityfs': {'mountpoint': '/sys/kernel/security', 'opts': 'rw,nosuid,nodev,noexec,relatime', 'fstype': 'securityfs'}, 'sysfs': {'mountpoint': '/sys', 'opts': 'rw,nosuid,nodev,noexec,relatime', 'fstype': 'sysfs'}, 'mqueue': {'mountpoint': '/dev/mqueue', 'opts': 'rw,relatime', 'fstype': 'mqueue'}, 'pstore': {'mountpoint': '/sys/fs/pstore', 'opts': 'rw,nosuid,nodev,noexec,relatime', 'fstype': 'pstore'}, '/dev/sda1': {'mountpoint': '/boot/efi', 'opts': 'rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro', 'fstype': 'vfat'}, 'hugetlbfs': {'mountpoint': '/dev/hugepages', 'opts': 'rw,relatime', 'fstype': 'hugetlbfs'}, 'systemd-1': {'mountpoint': '/proc/sys/fs/binfmt_misc', 'opts': 'rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=13075', 'fstype': 'autofs'}, 'cgroup': {'mountpoint': '/sys/fs/cgroup/pids', 'opts': 'rw,nosuid,nodev,noexec,relatime,pids', 'fstype': 'cgroup'}, 'sunrpc': {'mountpoint': '/var/lib/nfs/rpc_pipefs', 'opts': 'rw,relatime', 'fstype': 'rpc_pipefs'}, '/dev/md2': {'mountpoint': '/boot', 'opts': 'rw,relatime,attr2,inode64,noquota', 'fstype': 'xfs'}, 'tmpfs': {'mountpoint': '/sys/fs/cgroup', 'opts': 'ro,nosuid,nodev,noexec,mode=755', 'fstype': 'tmpfs'}, 'proc': {'mountpoint': '/proc', 'opts': 'rw,nosuid,nodev,noexec,relatime', 'fstype': 'proc'}, 'devtmpfs': {'mountpoint': '/dev', 'opts': 'rw,nosuid,size=16217048k,nr_inodes=4054262,mode=755', 'fstype': 'devtmpfs'}, '/dev/md3': {'mountpoint': '/', 'opts': 'rw,relatime,attr2,inode64,noquota', 'fstype': 'xfs'}, 'rootfs': {'mountpoint': '/', 'opts': 'rw', 'fstype': 'rootfs'}} mounts from proc


boot messages:
Code:
# /var/log/messages

Mar 12 18:59:54 host kernel: VFS: Disk quotas dquot_6.5.2
Mar 12 19:00:16 host systemd: Starting cPanel fix quotas on boot...
Mar 12 19:00:19 host dracut: Executing: /usr/sbin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict -o "plymouth dash resume ifcfg" --mount "/dev/disk/by-uuid/0ef2a656-b53f-49a2-8478-9a301b3b0617 /sysroot xfs defaults,uquota" --no-hostonly-default-device -f /boot/initramfs-3.10.0-1127.19.1.el7.x86_64kdump.img 3.10.0-1127.19.1.el7.x86_64
Mar 12 19:00:32 host fixquotas-onboot: You must reboot the server to enable XFS® filesystem quotas.
Mar 12 19:00:32 host systemd: Started cPanel fix quotas on boot.
...
2021-03-12 18:00:15,683 - cc_growpart.py[DEBUG]: '/' SKIPPED: device_part_info(/dev/md3) failed: /dev/md3 not a partition
2021-03-12 18:00:15,698 - main.py[DEBUG]: Ran 12 modules with 0 failures
2021-03-12 18:00:17,111 - main.py[DEBUG]: Ran 13 modules with 0 failures
2021-03-12 18:00:19,061 - main.py[DEBUG]: Ran 10 modules with 0 failures

kernel supports quotas:
Code:
# /usr/local/cpanel/scripts/fixquotas

journaled quota support: kernel supports, user space tools supports (available)
UUID=0ef2a656-b53f-49a2-8478-9a301b3b0617 (already configured quotas = 1).
UUID=6fa0c2e6-8018-4134-bc19-04ede1f33c05 (already configured quotas = 0).
Updating Quota Files..........Done
Quotas have been enabled and updated.

You must reboot the server to enable XFS® filesystem quotas.

fstab and xfs:
Code:
# cat /etc/fstab
UUID=0ef2a656-b53f-49a2-8478-9a301b3b0617       /       xfs     defaults,uquota 0       1
UUID=6fa0c2e6-8018-4134-bc19-04ede1f33c05       /boot   xfs     defaults        0       0
LABEL=EFI_SYSPART       /boot/efi       vfat    defaults        0       1
UUID=eed9d5c5-223b-45ac-a8fb-748168e79183       swap    swap    defaults        0       0
UUID=c3b07c38-c633-4e79-850a-5ebd2608c166       swap    swap    defaults        0       0
/usr/tmpDSK             /tmp                    ext3    defaults,noauto        0 0


grub (I have tried rebuilding several times):
Code:
# cat /etc/sysconfig/grub
GRUB_TIMEOUT=1
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial"
GRUB_DISABLE_RECOVERY="true"
GRUB_DISABLE_LINUX_UUID="false"
GRUB_CMDLINE_LINUX="rd.auto  crashkernel=auto vga=normal nomodeset rootflags=uquota"
GRUB_TERMINAL_OUTPUT=""
GRUB_ENABLE_BLSCFG=false
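For reference, after editing this file I regenerate the boot configuration; on an EFI CentOS 7 system the target is normally the EFI grub.cfg (path assuming the standard CentOS layout):
Code:
# grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg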
more:
Code:
# ls -l /*.user
ls: cannot access /*.user: No such file or directory


# mount | grep noquota
/dev/md3 on / type xfs (rw,relatime,attr2,inode64,noquota)
/dev/md2 on /boot type xfs (rw,relatime,attr2,inode64,noquota)


No quotas detected with the cPanel script:

Code:
# /usr/local/cpanel/scripts/resetquotas

Resetting quota for userXXX to 10240 M
No filesystems with quota detected.
Resetting quota for computes to 1000 M
No filesystems with quota detected.
No idea, really. I have found on the internet some people with a similar quota problem on Debian. It seems their server provider failed to include the linux-image-extra-virtual package in their own ISOs, and they solved the error after installing that package. I suspect it could be the same with OVH: they use their own ISOs, and this is not the first time they have had these problems. I wonder if perhaps there is the same problem with CentOS.

The linux-image-extra-virtual package provides extra kernel modules on Debian-based distros; it is not part of CentOS. However, maybe something similar is happening here.
I don't know, really.

Any help would be appreciated.
 

cPRex

Jurassic Moderator
Staff member
Oct 19, 2014
cPanel Access Level
Root Administrator
Sorry to hear you're still having issues. Since the machine is using XFS, did you try the steps outlined here?


If so, I'd get back with OVH about the issue.
 

Mise

Well-Known Member
May 15, 2011
cPRex said:
Sorry to hear you're still having issues. Since the machine is using XFS, did you try the steps outlined here? If so, I'd get back with OVH about the issue.
Yes, I tried this several times with no success. After rebooting, the yellow error appears and no quotas are available.

It seems the failure is at mount time:

Code:
# grep xfs /etc/fstab
UUID=0ef2a656-b53f-49a2-8478-9a301b3b0617       /       xfs     defaults,uquota 0       1
UUID=6fa0c2e6-8018-4134-bc19-04ede1f33c05       /boot   xfs     defaults        0       0

# mount | grep xfs | grep -v virtfs
/dev/md3 on / type xfs (rw,relatime,attr2,inode64,noquota)
/dev/md2 on /boot type xfs (rw,relatime,attr2,inode64,noquota)
see that "no quota". There is something denying the mounting at booting. Although I cannot see something related inside the logs.

Btw, the "# mount | grep xfs | grep -v virtfs" returns a different panorame than the guide example.
Are the "/dev/mapper/centos_whm1-... " something that should be present?


I will open a ticket with OVH, although I know they will say this is not their problem and that configuration problems are the user's problem.

For that reason I was a little disappointed when I opened a support ticket with cPanel. I expected a deeper investigation to identify the problem in the system, be it the kernel or whatever. Then I could write to OVH with some arguments to force a solution on their side.

I will try to find the cause before opening a ticket with them.

Thanks.
 

cPRex

Jurassic Moderator
Staff member
Oct 19, 2014
cPanel Access Level
Root Administrator
Are the "/dev/mapper/centos_whm1-... " something that should be present?
That's normal - it's just the name of the partition you're seeing, which will vary on each machine.

There are known issues with some of the OVH kernels, so it's possible that is the root of the issue as well, but I hope they'll be able to get you better details on that.
 

Mise

Well-Known Member
May 15, 2011
Hi, perhaps you can help me solve this issue.

I have found on the internet that SELinux can interfere with mounting quotas.

I know cPanel disables SELinux; however, inside the logs I see this sequence:

Code:
 # cat /var/log/dmesg
[    0.000557] Security Framework initialized
[    0.000563] SELinux:  Initializing.
[    0.000570] SELinux:  Starting in permissive mode
[    0.000570] Yama: becoming mindful.
.....
[    2.717217] random: systemd: uninitialized urandom read (16 bytes read)
[    2.728484] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[    2.750214] systemd[1]: Detected architecture x86-64.
[    2.761098] systemd[1]: Running in initial RAM disk.
.....
[    4.245534] sd 6:0:0:0: [sda] Attached SCSI disk
[    4.582737] md/raid1:md3: active with 2 out of 2 mirrors
[    4.590622] md/raid1:md2: active with 2 out of 2 mirrors
[    4.598120] md3: detected capacity change from 0 to 3999041126400
[    4.598917] md2: detected capacity change from 0 to 536281088
[    4.629658] random: fast init done
[    4.711154] random: crng init done
[    4.987533] SGI XFS with ACLs, security attributes, no debug enabled
[    4.996805] XFS (md3): Mounting V4 Filesystem
[    5.169382] XFS (md3): Ending clean mount
[    5.715538] systemd-journald[134]: Received SIGTERM from PID 1 (systemd).
[    6.196739] SELinux:  Disabled at runtime.
[    6.203327] SELinux:  Unregistering netfilter hooks
It seems SELinux is enabled at boot, then there is an attempt to mount XFS, and only later is SELinux disabled.

Therefore SELinux would not yet be disabled when XFS mounts, possibly impeding the quota mounting.

Is there some way to disable SELinux from the very start of boot?
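From what I have read, /etc/selinux/config is only read later in the boot process, which is why the log says "Disabled at runtime". To stop SELinux from initializing at all, the kernel parameter selinux=0 can be added to the command line - something like this, if I am not mistaken (paths assuming the standard CentOS EFI grub layout):
Code:
# sed -i 's/^GRUB_CMDLINE_LINUX="/&selinux=0 /' /etc/default/grub
# grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
# reboot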


I don't know what process is configured to start SELinux at boot. Inside /root I can find these mentions of SELinux:

Code:
 # cd /root && grep -ir selinux .
.cpanel/datastore/_usr_bin_gtar_--help:      --no-selinux           Disable the SELinux context support
.cpanel/datastore/_usr_bin_gtar_--help:      --selinux              Enable the SELinux context support
tmp/yum_save_tx.2020-11-09.03-12.UDbNmt.yumtx:mbr: libselinux-devel,x86_64,0,2.5,15.el7 70
tmp/yum_save_tx.2020-11-09.03-12.UDbNmt.yumtx:  relatedto: libselinux-devel,x86_64,0,2.5,15.el7@a:dependson
tmp/yum_save_tx.2020-11-09.03-12.UDbNmt.yumtx:  depends_on: libselinux-devel,x86_64,0,2.5,15.el7@a
anaconda-ks.cfg:# SELinux configuration
anaconda-ks.cfg:selinux --enforcing
anaconda-ks.cfg:echo "Fixing SELinux contexts."
original-ks.cfg:selinux --enforcing
original-ks.cfg:echo "Fixing SELinux contexts."

How can I locate the process that starts SELinux before XFS mounts?


thanks
 

Mise

Well-Known Member
May 15, 2011
Yes, sestatus returns "disabled".

Please see this line in the logs:
[ 0.000570] SELinux: Starting in permissive mode

It seems SELinux starts at boot in "permissive" mode, impeding XFS, and is only later set to disabled, after XFS has already been mounted.

My question is how I can prevent SELinux from starting in "permissive" mode at boot.

Thanks!
 

cPRex

Jurassic Moderator
Staff member
Oct 19, 2014
cPanel Access Level
Root Administrator
That part I'm not sure about - if it's disabled, I would not expect it to be starting at all, but that would be something for your hosting provider to check, as that isn't part of cPanel but part of the operating system itself. Can you have them look into that and see what they have to say?
 

Mise

Well-Known Member
May 15, 2011
Yes, now it seems clear it is not a cPanel failure but an error in the OS installation process.
I'm still waiting for a definitive answer from their side... :(
 

Mise

Well-Known Member
May 15, 2011
Two months later, they finally came forward to say I can reinstall the server because they cannot offer any other solution.

The non-existent support can be explained: it seems the operator chose the wrong template to install the dedicated server. According to the install logs, he installed several cloud services and other things, as if the server were meant for a cloud network or something similar. That install template configured the boot process and the SELinux behavior at the very beginning, therefore forever and ever, and now cPanel is not able to fix the quota problem. I have no idea how to bypass this, all the more so when the OVH kernels and boot process are modified by OVH themselves.

Also, we are still waiting for the promised refund because of the fire.

I got better support from AliExpress's EVA bot returning a $0.80 screw than from OVH while spending more than $1,000/year.

Thanks anyway - not cPanel's fault.
 

syedc

Registered
Feb 5, 2020
London
cPanel Access Level
Root Administrator
I realise that this is an old thread - BUT this is the thread I kept landing on, so imagine others will too.

So the issue is:
  • You have a dedicated server from OVH (might also be the case with servers from KimSufi or So You Start)
  • You installed CentOS, AlmaLinux or CloudLinux via the OVH interface
  • Now you can't get disk quotas to load on CloudLinux / cPanel
  • You've followed the cPanel guides listed here (for both BIOS and UEFI systems)
  • You've engaged CloudLinux, cPanel, OVH and your system administrators - none of whom can solve the issue bar "reformat and start again" (not an option for me)

The fix
  • For whatever reason, OVH-provisioned servers try to bootstrap the system from a networked drive (I imagine a common bootloader for their servers)
  • So all the changes you are making to your grub files simply aren't used!
  • It comes down to a file called /.ovhrc, which in my case had a "BUILD_UUID" param set to a drive I didn't recognise.
  • I edited the file to use my /boot drive's UUID
  • Regenerated my grub files with "grub2-mkconfig -o /boot/efi/EFI/almalinux/grub.cfg" (running AlmaLinux / CL 8) - see the sketch after this list
  • I had previously already edited /etc/fstab and /etc/default/grub to add the quota flags (otherwise you have to do that; follow the previously linked guides in this thread)
  • Reboot and enjoy quotas!!!!!!
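A rough command sketch of the above - the /dev/md2 device name for /boot and the almalinux grub.cfg path are assumptions from my setup (adjust to yours, and note some installs reportedly have the file at /root/.ovhrc instead of /.ovhrc):
Code:
# blkid -s UUID -o value /dev/md2
# vi /.ovhrc
# grub2-mkconfig -o /boot/efi/EFI/almalinux/grub.cfg
# reboot
The first command prints the /boot filesystem's UUID; set BUILD_UUID in /.ovhrc to that value before regenerating grub and rebooting.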
 

FadiObaji

Registered
Dec 5, 2022
Turkey
cPanel Access Level
Root Administrator
syedc said:
It comes down to a file called /.ovhrc, which in my case had a "BUILD_UUID" param set to a drive I didn't recognise. I edited the file to use my /boot drive's UUID, regenerated my grub files, and rebooted.
Hey man, I'm also facing the same issue with quotas on an OVHcloud dedicated server. I found that file and changed the UUID to my boot drive's, but it didn't work. Can you elaborate on your solution, please?
 

reficul

Member
Dec 15, 2008
Italy
cPanel Access Level
Root Administrator
syedc said:
It comes down to a file called /.ovhrc, which in my case had a "BUILD_UUID" param set to a drive I didn't recognise. I edited the file to use my /boot drive's UUID, regenerated my grub files, and rebooted.
Same here, but following your instructions did not fix my quota issue.
Is your .ovhrc in / ? Mine is in the /root folder.
 

Arvy

Well-Known Member
Oct 3, 2006
Brazil
cPanel Access Level
Root Administrator
Not working for me either.

It's Rocky Linux 8.7 on OVH. Right now (cPanel v106), AlmaLinux 9.1, Rocky 9 and Debian 20-11 are not available; OVH only offered the option to run under Rocky 8.7.

Changing /root/.ovhrc had no effect. grub.cfg has quotas enabled; fstab too. Backups are disabled (because fixquotas doesn't allow enabling quotas on the backup partition).

It's a server with 4x 2 TB disks. Maybe an option is to use 2 disks for / and 2 disks for /home:

- install Linux, using the OVH panel, on only 2 HDDs (sda and sdb).
- create a new RAID using sdc and sdd (see the sketch after this list): How to Set Up Software RAID 1 on an Existing Linux Distribution
- format it with mkfs.xfs - it will probably be called /dev/md127 (after a reboot)
- mount it temporarily: mount /dev/md127 /mnt
- rsync /home to /mnt and confirm that both /home and /mnt are identical: rsync -av /home/* /mnt/
- switch with: umount /mnt ; mv /home /home-old ; mount /dev/md127 /home
- get the UUID of md127 (ls /dev/disk/by-uuid) and add it to /etc/fstab: UUID=xxxx /home xfs defaults,uquota 0 1 - then reboot
- run initquotas, fixquotas and so on, and reboot
- if everything is OK, you can delete the old home (backup) at /home-old
- afterwards, use backups in /backup (noquota)
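A condensed sketch of those steps, assuming sdc and sdd are empty disks and that the new array really does appear as /dev/md127 (check cat /proc/mdstat before trusting that name):
Code:
# mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
# mkfs.xfs /dev/md127
# mount /dev/md127 /mnt
# rsync -a /home/ /mnt/
# umount /mnt ; mv /home /home-old ; mount /dev/md127 /home
# blkid -s UUID -o value /dev/md127
# echo "UUID=<uuid-from-blkid> /home xfs defaults,uquota 0 1" >> /etc/fstab
# /usr/local/cpanel/scripts/fixquotas
Then reboot and verify that /home shows usrquota in the mount output.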

This works with 2 disks too, but then with no RAID, since we need 2 separate disks :(

Since the problem is the / partition, the manually created RAID on sdc and sdd has quotas enabled:

Code:
/dev/md3 on / type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/md2 on /boot type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/md127 on /home type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,usrquota)
 