# [CVE-2024-4076] serve-stale zversion crash
| Quick Links | |
| --- | --- |
| Incident Manager: | @matthijs |
| Deputy Incident Manager: | @peterd |
| Public Disclosure Date: | 2024-07-23 |
| CVSS Score: | 7.5 |
| Security Advisory: | isc-private/printing-press!106 |
| Mattermost Channel: | CVE-2024-4076 |
| Support Ticket: | N/A |
| Release Checklist: | #4735 (closed) |
## Earlier Than T-5

- 🔗 (IM) Pick a Deputy Incident Manager
- 🔗 (IM) N/A Respond to the bug reporter
- 🔗 (SwEng) Ensure there are no public merge requests which inadvertently disclose the issue
- 🔗 (IM) Assign a CVE identifier
- 🔗 (SwEng) Update this issue with the assigned CVE identifier and the CVSS score
- 🔗 (SwEng) Determine the range of product versions affected (including the Subscription Edition)
- 🔗 (SwEng) Determine whether workarounds for the problem exist
- 🔗 (SwEng) N/A If necessary, coordinate with other parties
- 🔗 (Support) Prepare "earliest" notification text and hand it off to Marketing
- 🔗 (Marketing) Update "earliest" notification document in SF portal and send bulk email to earliest customers
- 🔗 (Support) Create a merge request for the Security Advisory and include all readily available information in it
- 🚫 🔗 (SwEng) Prepare a private merge request containing a system test reproducing the problem
- 🚫 🔗 (SwEng) Notify Support when a reproducer is ready
- 🔗 (SwEng) Prepare a detailed explanation of the code flow triggering the problem
- 🔗 (SwEng) Prepare a private merge request with the fix
- 🔗 (SwEng) Ensure the merge request with the fix is reviewed and has no outstanding discussions
- 🔗 (Support) Review the documentation changes introduced by the merge request with the fix
- 🔗 (SwEng) Prepare backports of the merge request addressing the problem for all affected (and still maintained) branches of a given product
- 🔗 (Support) Finish preparing the Security Advisory
- 🔗 (QA) Create (or update) the private issue containing links to fixes & reproducers for all CVEs fixed in a given release cycle
- 🔗 (QA) (BIND 9 only) Reserve a block of `CHANGES` placeholders once the complete set of vulnerabilities fixed in a given release cycle is determined
- 🔗 (QA) Merge the CVE fixes in CVE identifier order
- 🔗 (QA) Prepare a standalone patch for the last stable release of each affected (and still maintained) product branch
- 🔗 (QA) Prepare ASN releases (as outlined in the Release Checklist)
## At T-5

- 🔗 (Marketing) Update the text on the T-5 (from the Printing Press project) and "earliest" ASN documents in the SF portal
- 🔗 (Marketing) (BIND 9 only) Update the BIND -S information document in SF with download links to the new versions
- 🔗 (Marketing) Bulk email eligible customers to check the SF portal
- 🔗 (Marketing) (BIND 9 only) Send a pre-announcement email to the bind-announce mailing list to alert users that the upcoming release will include security fixes
## At T-1

- 🔗 (First IM) Send notifications to OS packagers
## On the Day of Public Disclosure

- 🔗 (IM) Grant QA & Marketing clearance to proceed with public release
- 🔗 (QA/Marketing) Publish the releases (as outlined in the release checklist)
- 🔗 (Support) (BIND 9 only) Add the new CVEs to the vulnerability matrix in the Knowledge Base
- 🔗 (Support) Bump Document Version for the Security Advisory and publish it in the Knowledge Base
- 🔗 (First IM) Send notification emails to third parties
- 🔗 (First IM) Advise MITRE about the disclosed CVEs
- 🔗 (First IM) Merge the Security Advisory merge request
- 🔗 (IM) Inform original reporter (if external) that the security disclosure process is complete
- 🔗 (Marketing) Update the SF portal to clear the ASN
- 🔗 (Marketing) Email ASN recipients that the embargo is lifted
## After Public Disclosure

- 🚫 🔗 (QA) Merge a regression test reproducing the bug into all affected (and still maintained) branches
### Summary

At least since version 9.19.18, I have been experiencing random SIGABRT crashes. systemd immediately restarts the service. The crashes occur at intervals ranging from a few hours to a few days.

Because `isc_assertion_failed` appears in the stack trace, I classified this bug as possibly security-related.
### BIND version affected

9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian

The last stable version, 9.18, ran for a week without this issue.
### What is the current bug behavior?

named randomly crashes with an ABRT.

### What is the expected correct behavior?

No crashes.
### Relevant configuration files

```
options {
    directory "/var/cache/bind";
    listen-on port 53 {
        99.99.99.197/32;
        99.99.99.1/32;
        99.99.99.1/32;
    };
    listen-on port 443 tls "local-tls" http "default" {
        99.99.99.197/32;
        99.99.99.1/32;
    };
    listen-on-v6 port 53 {
        99:99:99:99::2/128;
        99:99::c0a8:2b01/128;
    };
    listen-on-v6 port 443 tls "local-tls" http "default" {
        99:99:99:99::2/128;
        99:99::c0a8:2b01/128;
    };
    notify-rate 1;
    querylog no;
    recursive-clients 50;
    tcp-advertised-timeout 600;
    tcp-idle-timeout 600;
    tcp-initial-timeout 100;
    tcp-keepalive-timeout 600;
    version "hidden";
    allow-recursion {
        "any";
    };
    attach-cache "global";
    dnssec-validation auto;
    max-cache-size 65%;
    max-cache-ttl 604800;
    max-ncache-ttl 1800;
    max-recursion-depth 5;
    max-recursion-queries 50;
    max-stale-ttl 604800;
    min-cache-ttl 60;
    min-ncache-ttl 30;
    qname-minimization relaxed;
    rate-limit {
        all-per-second 15;
        exempt-clients {
            99.99.99.0/24;
            "localhost";
        };
        ipv4-prefix-length 24;
        ipv6-prefix-length 60;
        qps-scale 30;
        responses-per-second 15;
        slip 3;
        window 20;
    };
    response-policy {
        zone "rpz";
        zone "rpz-adblock";
    };
    servfail-ttl 30;
    stale-answer-enable yes;
    stale-cache-enable yes;
    max-refresh-time 172800;
    max-transfer-idle-in 15;
    max-transfer-idle-out 15;
    max-transfer-time-in 15;
    max-transfer-time-out 15;
    min-refresh-time 600;
};

tls "local-tls" {
    key-file "/etc/letsencrypt/live/host.example.com/privkey.pem";
    cert-file "/etc/letsencrypt/live/host.example.com/fullchain.pem";
};
```
### Relevant logs

#### coredump stacktrace

```
PID: 10012 (named)
UID: 108 (bind)
GID: 114 (bind)
Signal: 6 (ABRT)
Timestamp: Fri 2023-12-22 12:02:21 CET (11h ago)
Command Line: /usr/sbin/named -f -u bind
Executable: /usr/sbin/named
Control Group: /system.slice/named.service
Unit: named.service
Slice: system.slice
Boot ID: e101cb0bcfd64908bae1f6a2c2f94629
Machine ID: 4c7202fa306346a8b86a8e269e5ab68e
Hostname: host.example.com
Storage: /var/lib/systemd/coredump/core.named.108.e101cb0bcfd64908bae1f6a2c2f94629.10012.1703242941000000.zst
Message: Process 10012 (named) of user 108 dumped core.

Stack trace of thread 10012:
#0  0x00007fcca8f760fc n/a (libc.so.6 + 0x8a0fc)
#1  0x00007fcca8f28472 raise (libc.so.6 + 0x3c472)
#2  0x00007fcca8f124b2 abort (libc.so.6 + 0x264b2)
#3  0x00005586c9cbba3a n/a (named + 0x18a3a)
#4  0x00007fcca99f430a isc_assertion_failed (libisc-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x3630a)
#5  0x00007fcca9784b81 n/a (libns-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x2eb81)
#6  0x00007fcca97825f5 n/a (libns-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x2c5f5)
#7  0x00007fcca9784fd0 n/a (libns-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x2efd0)
#8  0x00007fcca97825f5 n/a (libns-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x2c5f5)
#9  0x00007fcca97825f5 n/a (libns-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x2c5f5)
#10 0x00007fcca9783e09 ns__query_start (libns-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x2de09)
#11 0x00007fcca978468e n/a (libns-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x2e68e)
#12 0x00007fcca97693d7 ns_client_request (libns-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x133d7)
#13 0x00007fcca99e88c0 n/a (libisc-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x2a8c0)
#14 0x00007fcca99e7c9a n/a (libisc-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x29c9a)
#15 0x00007fcca99e8b2a n/a (libisc-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x2ab2a)
#16 0x00007fcca99dff5d isc__nm_readcb (libisc-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x21f5d)
#17 0x00007fcca99ed2e4 isc__nm_tcp_read_cb (libisc-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x2f2e4)
#18 0x00007fcca93af3c1 n/a (libuv.so.1 + 0x1a3c1)
#19 0x00007fcca93afe38 n/a (libuv.so.1 + 0x1ae38)
#20 0x00007fcca93b7ff5 n/a (libuv.so.1 + 0x22ff5)
#21 0x00007fcca93a49bc uv_run (libuv.so.1 + 0xf9bc)
#22 0x00007fcca9a0745c n/a (libisc-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x4945c)
#23 0x00005586c9cbc967 main (named + 0x19967)
#24 0x00007fcca8f136ca n/a (libc.so.6 + 0x276ca)
#25 0x00007fcca8f13785 __libc_start_main (libc.so.6 + 0x27785)
#26 0x00005586c9cbd2ea _start (named + 0x1a2ea)

Stack trace of thread 10013:
#0  0x00007fcca8fecee9 syscall (libc.so.6 + 0x100ee9)
#1  0x00007fcca895bb94 n/a (liburcu-cds.so.8 + 0x2b94)
#2  0x00007fcca895bfb6 n/a (liburcu-cds.so.8 + 0x2fb6)
#3  0x00007fcca8f743ec n/a (libc.so.6 + 0x883ec)
#4  0x00007fcca8ff4a4c n/a (libc.so.6 + 0x108a4c)

Stack trace of thread 10014:
#0  0x00007fcca8f71156 n/a (libc.so.6 + 0x85156)
#1  0x00007fcca8f73818 pthread_cond_wait (libc.so.6 + 0x87818)
#2  0x00007fcca93b2299 uv_cond_wait (libuv.so.1 + 0x1d299)
#3  0x00007fcca939fe1d n/a (libuv.so.1 + 0xae1d)
#4  0x00007fcca8f743ec n/a (libc.so.6 + 0x883ec)
#5  0x00007fcca8ff4a4c n/a (libc.so.6 + 0x108a4c)

Stack trace of thread 10015:
#0  0x00007fcca8fecee9 syscall (libc.so.6 + 0x100ee9)
#1  0x00007fcca8a9f1bd n/a (liburcu.so.8 + 0x41bd)
#2  0x00007fcca8f743ec n/a (libc.so.6 + 0x883ec)
#3  0x00007fcca8ff4a4c n/a (libc.so.6 + 0x108a4c)
```
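Most frames in the traces above are unsymbolized (`n/a`), but each carries a module name and offset that can be resolved once matching debug symbols are installed. A minimal sketch for pulling out those (module, offset) pairs programmatically; the regular expression and the sample text are illustrative assumptions, not part of any BIND or systemd tooling:

```python
import re

# Matches coredumpctl-style frames such as:
#   "#5 0x00007fcca9784b81 n/a (libns-... + 0x2eb81)"
FRAME_RE = re.compile(
    r"#\d+\s+0x[0-9a-f]+\s+(?P<sym>\S+)\s+\((?P<module>.+?)\s\+\s(?P<offset>0x[0-9a-f]+)\)"
)

def unsymbolized_frames(trace: str) -> list[tuple[str, str]]:
    """Return (module, offset) pairs for frames the journal could not symbolize."""
    frames = []
    for line in trace.splitlines():
        m = FRAME_RE.search(line)
        if m and m.group("sym") == "n/a":
            frames.append((m.group("module"), m.group("offset")))
    return frames

sample = """\
#4 0x00007fcca99f430a isc_assertion_failed (libisc-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x3630a)
#5 0x00007fcca9784b81 n/a (libns-9.19.19-1+0~20231220.107+debian11~1.gbpfc5ec0-Debian.so + 0x2eb81)
"""
print(unsymbolized_frames(sample))
```

Each resulting pair can then be fed to a symbolizer such as `addr2line -e <path-to-library> <offset>` (the library path depends on the distribution's packaging) to recover source file and line information.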
#### /proc/cpuinfo

```
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel Xeon E312xx (Sandy Bridge)
stepping        : 1
microcode       : 0x1
cpu MHz         : 2499.990
cache size      : 16384 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx hypervisor lahf_lm cpuid_fault pti xsaveopt arat
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown
bogomips        : 4999.98
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:
```
#### /proc/meminfo

```
MemTotal:         472412 kB
MemFree:           33984 kB
MemAvailable:     224056 kB
Buffers:              60 kB
Cached:           181344 kB
SwapCached:        23076 kB
Active:           169116 kB
Inactive:         184952 kB
Active(anon):      81520 kB
Inactive(anon):    91284 kB
Active(file):      87596 kB
Inactive(file):    93668 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       1048572 kB
SwapFree:         969448 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:        156268 kB
Mapped:            37600 kB
Shmem:               140 kB
KReclaimable:      20896 kB
Slab:              47192 kB
SReclaimable:      20896 kB
SUnreclaim:        26296 kB
KernelStack:        1776 kB
PageTables:         3144 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     1284776 kB
Committed_AS:     245644 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       18016 kB
VmallocChunk:          0 kB
Percpu:              568 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:      114544 kB
DirectMap2M:      409600 kB
```