In this blog post we will walk through how throwing 160 distributed CPUs at a fuzzing target that initially took about one CPU year to crack can shorten the fuzzing time substantially. We will also share a case where throwing more CPUs doesn't necessarily help.
Our first target was CVE-2017-3732, a carry-propagation bug in OpenSSL's BN_mod_exp. It was originally discovered by OSS-Fuzz using the in-tree bignum fuzz target, and it took at least one CPU year to find the bug for the first time. We used the openssl-1.1.0c test target from Google's fuzzer-test-suite.
We took this target and threw 160 distributed CPUs at it using the Fuzzit platform. A good best practice in distributed fuzzing is to first fuzz on a single CPU; otherwise, all workers will discover the same paths and generate similar test cases, which slows them down instead of speeding them up and letting them help each other.
This is exactly what we did. We compiled the target, uploaded it without a seed corpus to the Fuzzit platform, and created a fuzzing job that ran for 12 hours, plus a merge cron job that merged the corpus every hour. The exact time to fuzz a target before scaling up depends on the target. This particular target was slow from the beginning, so about 12 hours of single-CPU fuzzing was enough before creating a distributed fuzzing job with 160 CPUs.
It took less than an hour for the first worker to find the bug, and another hour for a second worker to find the same bug.
Things that didn’t work out – CVE-2018-5146 (VORBIS)
We tried to distribute the vorbis target provided by Google's fuzzer-test-suite (which, per their documentation, takes several hundred CPU hours), but we were unable to speed it up substantially with 80 CPUs, probably because we didn't fuzz the initial corpus long enough, which slowed down all the workers. The important takeaway is that large-scale distributed fuzzing usually only makes sense with a strong seed corpus, or after fuzzing long enough to produce a good corpus.
A bit on the internals of how distributed fuzzing works on the Fuzzit platform:
For every fuzz target with N CPUs, we create N workers that run the same fuzzer and share their corpus with each other. Sharing the corpus helps the workers learn from each other: once one worker finds a new path, the rest can benefit from it. Every few hours we also automatically run a merge job that minimizes the shared corpus, which can speed up the workers; not doing so might bloat the corpus, which can slow them down substantially.
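The idea behind that merge step can be sketched in a few lines. Below is a minimal, conceptual Python illustration, not Fuzzit's actual implementation: the coverage sets stand in for the edge coverage a real fuzzer such as libFuzzer would measure, and `merge_corpus` is a hypothetical helper.

```python
# Conceptual sketch of coverage-based corpus merging (hypothetical helper,
# not Fuzzit's actual implementation). Each test case is paired with the
# set of coverage features (e.g. edges) it exercises; inputs that add no
# new coverage are dropped, keeping the merged corpus small.

def merge_corpus(candidates):
    """Greedily keep only inputs that contribute new coverage.

    candidates: list of (test_case, coverage_features) pairs.
    Larger coverage sets are considered first, so small redundant
    inputs are more likely to be dropped.
    """
    merged = []
    covered = set()
    for case, features in sorted(candidates, key=lambda c: -len(c[1])):
        new = features - covered
        if new:                 # keep only inputs that add coverage
            merged.append(case)
            covered |= new
    return merged, covered

# Corpora gathered from several workers; many inputs hit the same paths.
workers = [
    (b"seed-a", {1, 2, 3}),
    (b"seed-b", {2, 3}),        # redundant: covered by seed-a
    (b"seed-c", {3, 4}),        # adds feature 4
    (b"seed-d", {1, 2, 3, 4}),  # covers everything above by itself
]

merged, covered = merge_corpus(workers)
print(len(merged), sorted(covered))  # → 1 [1, 2, 3, 4]
```

The merged corpus preserves the workers' combined coverage with far fewer files, which is exactly why the periodic merge keeps the distributed workers fast.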
The Fuzzit platform is not only intended for security researchers; it is also built with developers in mind, so they can integrate continuous fuzzing into their CI/CD pipeline, or what we like to call CI/CF :).
Register for our Alpha Testers Program! And stay tuned.