cPanel Update Pre Maintenance didn't exit cleanly (256)

Operating System & Version
CloudLinux 7.7
cPanel & WHM Version
86.0.18

Cloud9

Well-Known Member
Sep 17, 2012
UK
cPanel Access Level
Root Administrator
When I run

Code:
/usr/local/cpanel/scripts/upcp
I get

Code:
=> Log opened from cPanel Update (upcp) - Slave (771614) at Wed Apr 29 21:04:28 2020
[2020-04-29 21:04:28 +0100] E Pre Maintenance ended, however it did not exit cleanly (256). The following events were logged: "scripts/rpmup". Please check the logs for an indication of what happened
[2020-04-29 21:04:29 +0100]   95% complete
[2020-04-29 21:04:29 +0100]   Running /usr/local/cpanel/scripts/postupcp
=> Log closed Wed Apr 29 21:04:29 2020
[2020-04-29 21:04:55 +0100]   Running Standardized hooks
[2020-04-29 21:04:59 +0100]   100% complete
[2020-04-29 21:04:59 +0100]  
[2020-04-29 21:04:59 +0100]     cPanel update completed
[2020-04-29 21:04:59 +0100]   A log of this update is available at /var/cpanel/updatelogs/update.1588190130.log
[2020-04-29 21:04:59 +0100]   Removing upcp pidfile
[2020-04-29 21:04:59 +0100]  
[2020-04-29 21:04:59 +0100] Completed all updates
=> Log closed Wed Apr 29 21:04:59 2020
If I then run

Code:
/usr/local/cpanel/scripts/rpmup
I get

Code:
--> Finished Dependency Resolution


Total size: 261 M
Total download size: 19 M
No Presto metadata available for cloudlinux-x86_64-server-7

info [rpmup] Completed yum execution “--assumeyes --config /etc/yum.conf update --disablerepo=epel”: in 68.287 second(s).
(XID jtu6qd) “/usr/bin/yum” reported error code “1” when it ended:
checkyum version 22.3  (excludes: bind-chroot ruby)
I have rebuilt the RPM database in WHM, but I still get the above errors.
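For anyone following along, these are roughly the equivalent steps from a root shell; the yum arguments are the ones shown in the rpmup output above, minus --assumeyes so the underlying failure can be inspected interactively:

Code:
# Rebuild the RPM database by hand (roughly what the WHM option does)
rpm --rebuilddb

# Clear cached repository metadata so the next run fetches fresh data
yum clean all

# Re-run the same update command that rpmup invoked, per the log above
/usr/bin/yum --config /etc/yum.conf update --disablerepo=epel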
 

cPanelLauren

Product Owner
Staff member
Nov 14, 2017
Houston
Was the rpmup error actually the issue that prevented maintenance from completing? You'd need to look at the logs to determine this. They're noted at the bottom of the output here:
Code:
/var/cpanel/updatelogs/update.1588190130.log
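Error lines in that log are prefixed with an "E" after the timestamp (as in the excerpt in your first post), so something along these lines should pull out just the failures; the path is the one from your output:

Code:
grep '] E ' /var/cpanel/updatelogs/update.1588190130.log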
 

Cloud9

Well-Known Member
Sep 17, 2012
UK
cPanel Access Level
Root Administrator
Was the rpmup error actually the issue that prevented maintenance from completing? You'd need to look at the logs to determine this. They're noted at the bottom of the output here:
Code:
/var/cpanel/updatelogs/update.1588190130.log
The first post was the end of the log.

This is from the middle of the log, but it's pretty much the same as above; the rest of the log looks clean.

Code:
[2020-04-29 21:04:12 +0100]     [/usr/local/cpanel/scripts/rpmup] Total download size: 261 M
[2020-04-29 21:04:12 +0100]     [/usr/local/cpanel/scripts/rpmup] Downloading packages:
[2020-04-29 21:04:12 +0100]     [/usr/local/cpanel/scripts/rpmup] No Presto metadata available for cloudlinux-x86_64-server-7
[2020-04-29 21:04:12 +0100]     [/usr/local/cpanel/scripts/rpmup]
[2020-04-29 21:04:12 +0100]     [/usr/local/cpanel/scripts/rpmup] (XID hapkja) “/usr/bin/yum” reported error code “1” when it ended:
[2020-04-29 21:04:12 +0100]     [/usr/local/cpanel/scripts/rpmup] checkyum version 22.3  (excludes: bind-chroot ruby)
[2020-04-29 21:04:12 +0100] E    [/usr/local/cpanel/scripts/rpmup] The “/usr/local/cpanel/scripts/rpmup” command (process 771707) reported error number 1 when it en$
[2020-04-29 21:04:12 +0100]   The Administrator will be notified to review this output when this script completes
[2020-04-29 21:04:12 +0100]    - Finished command `/usr/local/cpanel/scripts/rpmup` in 513.389 seconds
[2020-04-29 21:04:12 +0100]   26% complete
[2020-04-29 21:04:12 +0100]    - Finished in 513.389 seconds
 

cPanelLauren

Product Owner
Staff member
Nov 14, 2017
Houston
You know, I just realized there was an issue with the CloudLinux repos: one of the internal mirrors CloudLinux maintains had a synchronization problem. Has this error occurred again on nightly maintenance?
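If the mirror problem is the cause, stale cached metadata could keep the update failing, so it may be worth discarding the yum cache and retrying; a rough sketch, assuming the standard yum tooling on CloudLinux 7:

Code:
# Drop cached repository metadata and rebuild it
yum clean metadata
yum makecache

# Retry the same script that failed during Pre Maintenance
/usr/local/cpanel/scripts/rpmup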
 

cPanelLauren

Product Owner
Staff member
Nov 14, 2017
Houston
Good to know. If you see any errors again on updates (a normal update run, without forcing or moving to a new version), let us know, but I believe the issue with the mirror was fixed Wednesday afternoon.
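To confirm the normal update path is healthy again, a plain run can be kicked off manually and the Pre Maintenance stage checked for a clean exit; the log location is printed at the end of the run, as in your first post:

Code:
# Start a normal (non-forced) update run and watch for the
# Pre Maintenance stage completing without the (256) error
/usr/local/cpanel/scripts/upcp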