Resolve "Integrate clusterfuzzlite/libfuzzer/oss-fuzz in the main core Kea repo"
Requested to merge `3605-integrate-clusterfuzzlite-libfuzzer-oss-fuzz-in-the-main-core-kea-repo` into `master`.
Closes #3605.
- 4b5332fe Prepare existing code for fuzzing
  - Separate ENABLE_AFL into ENABLE_FUZZING and HAVE_AFL.
  - Add the --disable-unicode flag required in the oss-fuzz container.
  - Add a check for C++17 support.
  - Make Kea compile with afl++.
  - Rotate ports in `getServerPort()` functions under an env var (a sketch of the idea follows this list).
  - Fix some destruction issues that would result in crashes when fuzzing.
  - Add some checks in the UnixControlClient that prevent crashes when fuzzing.
  - Add an `isc::util::isSocket()` function (see the utility sketch after this list).
  - Change `isc::util::file::Path` to not append a trailing slash, allowing chained calls of `parentPath()`.
  - Add `isc::util::file::TemporaryDirectory` (also sketched below), which is useful when fuzzing.
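To illustrate the port-rotation idea: a minimal sketch, assuming an environment variable named `KEA_PORT_OFFSET` and a base port of 10547 — both are placeholder names/values, not necessarily what the commit uses.

```cpp
// Hypothetical sketch: offset the test server port via an environment
// variable so parallel fuzzing runs don't collide on the same port.
// KEA_PORT_OFFSET and the base port 10547 are assumed, not Kea's actual values.
#include <cstdint>
#include <cstdlib>

uint16_t getServerPort() {
    uint16_t port = 10547;  // assumed default test port
    if (const char* offset = std::getenv("KEA_PORT_OFFSET")) {
        port += static_cast<uint16_t>(std::atoi(offset));
    }
    return port;
}
```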
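For the two utilities above, a minimal sketch assuming POSIX `fstat()`/`S_ISSOCK` for the socket check and `mkdtemp()` for the temporary directory; the real `isc::util` signatures and error handling may differ (e.g. `isSocket()` might take a path rather than a descriptor).

```cpp
// Hypothetical sketches only; not the actual isc::util implementations.
#include <cstdlib>
#include <stdexcept>
#include <string>

#include <stdlib.h>
#include <sys/stat.h>

// True if the descriptor refers to a socket — useful to validate fds
// before the fuzzing harness drives I/O on them.
bool isSocket(int fd) {
    struct stat st;
    if (fstat(fd, &st) == -1) {
        return false;
    }
    return S_ISSOCK(st.st_mode);
}

// RAII temporary directory: created on construction, removed on
// destruction, so each fuzzing run gets a clean filesystem sandbox.
class TemporaryDirectory {
public:
    TemporaryDirectory() {
        char tmpl[] = "/tmp/kea-fuzz-XXXXXX";
        if (mkdtemp(tmpl) == nullptr) {
            throw std::runtime_error("mkdtemp failed");
        }
        path_ = tmpl;
    }
    ~TemporaryDirectory() {
        // Lazy cleanup via system(3); fine for a sketch, not for production.
        std::string cmd = "rm -rf " + path_;
        static_cast<void>(std::system(cmd.c_str()));
    }
    const std::string& path() const { return path_; }
private:
    std::string path_;
};
```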
- fa39a1f4 Integrate a new fuzzing solution in Kea. The solution is based on the ClusterFuzzLite, libFuzzer, and OSS-Fuzz technologies.
  - Add the `.clusterfuzzlite` directory.
  - Add the fuzz CI stage and the fuzzing CI jobs.
  - Add the fuzzing targets in the `fuzz` directory (a minimal target shape is sketched after this list).
  - Document fuzzing in Doxygen.
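For reference, targets in the `fuzz` directory would follow the standard libFuzzer entry-point shape shown below; `processPacket()` is a hypothetical stand-in for whatever Kea code a given target actually exercises.

```cpp
// Minimal libFuzzer target shape. The real Kea targets would feed the
// bytes into packet or configuration parsing instead of this placeholder.
#include <cstddef>
#include <cstdint>

// Hypothetical entry point into the code under test.
static void processPacket(const uint8_t* data, size_t size) {
    // Placeholder: a real target would construct e.g. a DHCP packet from
    // the bytes and exercise Kea's parser.
    if (size > 0 && data[0] == 0xFF) {
        // ...
    }
}

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
    processPacket(data, size);
    return 0;  // non-zero return values are reserved by libFuzzer
}
```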
Here's a TODO list that I intended to turn into separate tickets after this MR gets merged; if the reviewer insists, we could do them as part of this one as well.
- Find a way to fetch results from the fuzz-batch job and integrate them into the Jenkins test report.
  - We might need to wait until another crash happens, or intentionally cause a crash, in order to have artifacts to work with.
- Remove the requirement for input to be in hex format in packet-fuzzer (a sketch of the change follows this list).
- See why the fuzz-build job is not triggered.
- Fix the fuzz-coverage job. It needs clang++ 19.
- Fix the fuzz-prune job. It seems to be an external issue.
- Create a simple way to replicate fuzzing errors locally, or document how to do it.
  - There is a run-locally.sh script, but it's improvised. The proper way to run locally, according to the OSS-Fuzz docs, is to upload instructions on how to build Kea (see `.clusterfuzzlite`) to the OSS-Fuzz repo, then run their Python script, which downloads Docker images that they build regularly (probably with an updated seed corpus) and runs on those.
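To make the hex-format item concrete, a sketch contrasting the assumed current behavior (hex-decoding the input before use) with the intended one (feeding raw bytes straight to the parser); `decodeHex()` and `fuzzPacket()` are illustrative names only, not the actual packet-fuzzer code.

```cpp
// Sketch of the TODO: today (assumed) the input must be hex text that the
// harness decodes; after the change, the raw bytes would be used directly,
// letting the fuzzer mutate the actual wire format.
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical hook into the code under test.
static void fuzzPacket(const uint8_t*, size_t) {
    // Placeholder for the real packet-parsing entry point.
}

// Current (assumed) behavior: inputs are hex strings, so the fuzzer wastes
// effort learning the hex alphabet before reaching interesting code.
static std::vector<uint8_t> decodeHex(const uint8_t* data, size_t size) {
    auto nibble = [](uint8_t c) -> int {
        if (c >= '0' && c <= '9') return c - '0';
        if (c >= 'a' && c <= 'f') return c - 'a' + 10;
        if (c >= 'A' && c <= 'F') return c - 'A' + 10;
        return -1;
    };
    std::vector<uint8_t> out;
    for (size_t i = 0; i + 1 < size; i += 2) {
        int hi = nibble(data[i]), lo = nibble(data[i + 1]);
        if (hi < 0 || lo < 0) break;
        out.push_back(static_cast<uint8_t>(hi << 4 | lo));
    }
    return out;
}

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
    // TODO outcome: drop decodeHex() and call fuzzPacket(data, size) directly.
    std::vector<uint8_t> raw = decodeHex(data, size);
    fuzzPacket(raw.data(), raw.size());
    return 0;
}
```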