Added "-g" option for maximum number of user-specified tokens #519
Conversation
Sometimes I've needed to make use of bigger dictionaries (with more than 200 items). Until now, the only option was to modify MAX_DET_EXTRAS in config.h.
Thanks for the PR.
I thought about this and I propose something different: the extras dictionary grows to as many entries as are added, with no limit. IMHO it doesn't make sense to have a limit.
As far as I understand (@antonio-morales correct me if I'm wrong), you can add an unlimited number of dict entries, but once you pass the current threshold of 200 they are sometimes skipped probabilistically:

```c
/* Skip extras probabilistically if afl->extras_cnt > MAX_DET_EXTRAS. Also
   skip them if there's no room to insert the payload, if the token
   is redundant, or if its entire span has no bytes set in the effector
   map. */

if ((afl->extras_cnt > MAX_DET_EXTRAS &&
     rand_below(afl, afl->extras_cnt) >= MAX_DET_EXTRAS) ||
    ...
```

This seems like an optimization so other, non-dict things also get a chance.
@domenukk @vanhauser-thc Yes, I have used bigger dictionaries on a few occasions in the past (modifying the MAX_DET_EXTRAS constant and rebuilding the code). I haven't benchmarked it but, if I'm not mistaken, it's a linear increase in time, i.e. O(n).
I think that dictionaries with more than 200 entries are useful, for example, if you're fuzzing complex interpreters/compilers. The variable AFL_MAX_DET_EXTRAS looks like a good idea to me, and it allows you to save the "-g" flag for a future feature 😄 And I've kept the probabilistic functionality because I think that it can be interesting in certain cases or scenarios.
Can you take a look and see if 1301552 is a good solution to your problem?
@domenukk this means the user has to make an extra effort: first checking how many dictionary entries there are to be loaded. With LTO autodict this is additionally cumbersome. We should simply load everything supplied and expect the user to know what he/she is doing.
Ah, you mean like a
Just as a clarification, the important loop here (AFLplusplus/src/afl-fuzz-one.c, line 1510 in 572944d)
doesn't always skip the same entries, but skips random entries on each fuzz_one. They will all get picked up eventually. This makes a lot of sense to me: if I have more than 200 dict entries, I don't want each fuzz_one to pick all of them up for every mutation, I want afl to pick random ones. I would argue this env variable should only be used very occasionally, by people who know what they are doing. Most of the time this will deteriorate fuzzing.
No, I mean that currently there is a limit on the number of entries afl-fuzz reads in, and it aborts otherwise.
Ah @vanhauser-thc gotcha. How about #523?
If #523 is merged, is this issue/feature done or is there anything still open?
@domenukk @vanhauser-thc I think 1301552 and #523 work well, so this issue can be closed now 👍
I think it would be a good idea to be able to modify this option at runtime.