DNS Zones randomly not refreshing after changes - PowerDNS

d3c0y

Member
Nov 2, 2016
5
0
1
Australia
cPanel Access Level
Root Administrator
Hi there, I've had a couple of occurrences where I update zones through WHM and they don't refresh properly until I manually edit the .db file and increment the SOA serial. Is this a common issue? It's only popped up in the last month or two, and there doesn't seem to be anything obvious in the messages log.
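For anyone comparing notes, here is a quick sketch of how to check whether the running server has picked up a new SOA serial (the zone name `example.com` and the `/var/named/` path are placeholders; substitute your own domain and zone file location):

```shell
# Serial the running nameserver is actually answering with:
dig +short SOA example.com @127.0.0.1 | awk '{print $3}'

# Serial recorded in the on-disk zone file (cPanel keeps
# BIND-style zone files under /var/named/):
grep -i 'serial' /var/named/example.com.db
```

If the served serial lags behind the one in the file, the reload never happened.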

  • CLOUDLINUX 6.10 xen pv [vps2]
  • v82.0.15
 

cPanelLauren

Product Owner II
Staff member
Nov 14, 2017
13,266
1,300
363
Houston
Hi @d3c0y

When a zone is edited you should see the following in /var/log/messages:

Code:
Oct 10 11:45:47 server pdns_server: Reload was requested
Are you seeing anything like that?

Also, go to WHM >> Server Configuration >> Tweak Settings >> Logging and set the dnsadmin logging level to 9. What is then output to the dnsadmin log at /usr/local/cpanel/logs/dnsadmin_log?
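One way to watch for that reload message live (a simple sketch; the log path is the stock syslog location mentioned above):

```shell
# Follow the system log while re-saving a zone in WHM; a healthy
# edit should be followed by a "Reload was requested" line from pdns.
tail -f /var/log/messages | grep --line-buffered 'pdns'
```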
 

d3c0y

Hi Lauren,

I have a ton of this in my messages log:

Oct 17 04:43:17 vps2 pdns[2443]: Non-fatal STL error in control listener command 'reload': failed in writen2: Broken pipe

Also, when I make a change in DNS, I can see in the messages log that it just tries to reload over and over again:

Oct 17 15:13:21 vps2 pdns[2443]: Our pdns instance exited with code 1, respawning
Oct 17 15:13:22 vps2 pdns[2068804]: Guardian is launching an instance
Oct 17 15:13:22 vps2 pdns[2068804]: Reading random entropy from '/dev/urandom'
Oct 17 15:13:22 vps2 pdns[2068804]: Loading '/usr/lib64/pdns/libbindbackend.so'
Oct 17 15:13:22 vps2 pdns[2068804]: This is a guarded instance of pdns
Oct 17 15:13:22 vps2 pdns[2068804]: Unable to bind UDP socket to '0.0.0.0:53': Address already in use
Oct 17 15:13:22 vps2 pdns[2068804]: Fatal error: Unable to bind to UDP socket
Oct 17 15:13:23 vps2 pdns[2443]: Our pdns instance exited with code 1, respawning
Oct 17 15:13:24 vps2 pdns[2068812]: Guardian is launching an instance
Oct 17 15:13:24 vps2 pdns[2068812]: Reading random entropy from '/dev/urandom'
Oct 17 15:13:24 vps2 pdns[2068812]: Loading '/usr/lib64/pdns/libbindbackend.so'
Oct 17 15:13:24 vps2 pdns[2068812]: This is a guarded instance of pdns
Oct 17 15:13:24 vps2 pdns[2068812]: Unable to bind UDP socket to '0.0.0.0:53': Address already in use
Oct 17 15:13:24 vps2 pdns[2068812]: Fatal error: Unable to bind to UDP socket

Nothing of note in the dnsadmin_log; everything seems normal there.
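Given the "Address already in use" errors above, a quick way to see what already holds port 53 (a sketch; `netstat` ships with CloudLinux 6, while `ss` is the modern equivalent):

```shell
# List whichever process is already bound to UDP port 53.
netstat -lnup | grep ':53 '

# Look for a stray BIND process that PowerDNS is colliding with.
# The [n] in the pattern stops grep from matching its own entry.
ps aux | grep '[n]amed'
```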
 

cPanelLauren

Yes! That will indeed cause a ton of issues, and I'm really glad you found it. We have a case open for similar behavior that needs reproduction, but since you've resolved the issue I will add this thread to the case for reference purposes. The case number is CPANEL-28003, in the event you want to follow up on it in our changelogs at some point in the future.
 

d3c0y

Just an update for anyone else who encounters this issue on their cPanel server: even though named doesn't show up in the Service Manager in WHM, that doesn't mean the service isn't configured to auto-start on boot.

If you run chkconfig --list named from a shell, you will probably find that it's still set to on for some of the runlevels. Simply run chkconfig named off to disable it.
Working example below.

Bash:
root@vps2 [/etc/rc3.d]# chkconfig --list named
named           0:off   1:off   2:on    3:on    4:on    5:on    6:off
root@vps2 [/etc/rc3.d]# chkconfig named off
root@vps2 [/etc/rc3.d]# chkconfig --list named
named           0:off   1:off   2:off   3:off   4:off   5:off   6:off
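As a follow-up sanity check, this one-liner confirms no runlevel is still enabled by parsing the same chkconfig output format shown above (a sketch for SysV-init systems like CloudLinux 6; on systemd-based releases `systemctl disable --now named` would be the analogue):

```shell
# Prints confirmation only if no runlevel still has named set to "on";
# grep -v inverts the match, so the pipeline succeeds only when the
# chkconfig line contains no ":on" entries.
chkconfig --list named | grep -vq ':on' && echo "named disabled at all runlevels"
```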
 

cPanelLauren

Hi @d3c0y

I just checked in on that case and, while it's not resolved, it looks like another case actually fixed the problem: CPANEL-28972, regarding timeout errors when setting up pdns, which was released in v84. Have you continued to encounter this issue on servers running v84.0.21?