12/11/2023

Burp suite directory brute force

When we talk about directory brute forcing we are in essence trying to guess the directories on our target's web server. We know that there is a web server running, and we might even have access to certain pages, like a /login.php that is guarding some juicy loot, or we might just see that an IIS server is running and want to explore it some more. Whatever the case may be, we can approach this problem with several attack strategies. This is something we always do automated, because trying to guess possibly millions of directories and checking them manually can take quite a while, as you might imagine. You can probably also imagine that if I ask you to check 10 directories, it will take you a lot less time than checking 100,000. I bring this up because, even though it is normal and logical, the same goes for automated scanners: the quality of your wordlist will determine the quality of your results, and the length of your wordlist will determine the runtime of your attack.

Attack strategies

Non-recursive vs recursive scanning

It does not matter what we want to fuzz, whether it be directories, content or even vhosts: when we talk about scanning non-recursively, we are referring to whether or not the crawler should follow the links that it finds. The crawler is the robot that makes the requests we set it up to create based on our wordlists. In non-recursive scanning we do not allow the crawler to follow any links at all; we want it to make only the requests we tell it to, and if it finds a link, to ignore it. Recursive scanning, however, allows the crawler to follow the links it finds. In recursive crawling we can also set the depth, which determines how deep the crawler will follow those links. Sometimes the crawler follows a link and finds even more links on that page; we can set it up to follow those as well, or to go just one level deep. A minimal sketch of the difference follows below.
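To make the depth idea concrete, here is a minimal Python sketch. It is not the implementation of Burp Suite or of any other real tool, just an illustration: with depth=1 it behaves like a non-recursive scan, and with a higher depth it replays the wordlist against every directory it finds, one level per unit of depth. The target URL and the tiny inline wordlist are placeholder assumptions.

```python
import urllib.request
import urllib.error

# Hypothetical inputs: swap in your own target and wordlist file.
TARGET = "https://example.com"
WORDLIST = ["admin", "backup", "images", "login", "uploads"]  # normally read from a file


def exists(url):
    """Return True when the server answers with anything other than 404."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status != 404
    except urllib.error.HTTPError as err:
        return err.code != 404
    except urllib.error.URLError:
        return False


def brute_force(base, words, depth=1):
    """Guess directories under `base`.

    depth=1 is a non-recursive scan: only the words in the list are tried.
    depth>1 is a recursive scan: every directory that is found becomes a new
    base and the wordlist is replayed against it, one level per unit of depth.
    """
    found = []
    for word in words:
        url = f"{base.rstrip('/')}/{word}/"
        if exists(url):
            found.append(url)
            if depth > 1:  # recursive mode: dig into what we just found
                found += brute_force(url, words, depth - 1)
    return found


if __name__ == "__main__":
    for hit in brute_force(TARGET, WORDLIST, depth=2):
        print(hit)
```

Real scanners add concurrency, status-code and response-size filtering, and rate limiting, but the depth parameter is essentially the whole difference between the two modes.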
When we talk about content discovery we can either mean adding content discovery to our methodology or doing only file discovery. We can do this recursively or non-recursively, but whichever option we pick, the type of content we are looking for will also play a factor in the runtime of our tools. If you are looking for image files, for example, you might be looking for JPG files, but you might also want to add PNG and GIF to the mix, which will triple the runtime of our tools since every entry has to be requested three times. When we fuzz for content discovery we can fuzz for several different things, and I recommend that you have a specialised wordlist for every type of content, because of course fuzzing for pictures will probably require a different wordlist than fuzzing for documents.

We've talked about runtime several times before in this document, and that has a reason: runtime is going to be one of the determining factors of a successful attack. You can't have a good directory brute force if it runs for 6 years! So you might be wondering, okay, what wordlist exactly do I use, uncle? I say pick one, but make it count! Make sure you pick a list that fits your target, and if you can't find one, then maybe you should make one. There will be a video included in the course on the wordlists that I use.

Some of these tools will allow us to check a whole list of URLs and do directory brute forcing on that list instead of just checking one target at a time. Even if the tool we use does not allow us to do this, a simple command can be all that it takes to feed a list to our tool instead of a single URL; a small sketch at the end of this post shows one way to do it. In this case, we really need to make sure our wordlists are not too big, because the entire scan is repeated for every target, which increases the scan time.

Parameter fuzzing, content discovery or directory brute forcing? HELP!

Besides the already known options of content discovery and directory brute forcing, there is also the option to perform parameter fuzzing. Especially on slow computers this will be a huge hassle to complete, and we do not want to frustrate ourselves any more than we need to. I do not recommend it, and I recommend only doing targeted directory brute forcing attacks.
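As a rough sketch under assumed inputs, the loop below feeds a whole list of targets to the same kind of scan and adds a few image extensions to the mix. The file names targets.txt and words.txt, and the extension list, are hypothetical placeholders rather than anything from the course; the point is the request-count arithmetic in the comment.

```python
import urllib.request
import urllib.error

# Hypothetical files: one target URL per line, one word per line.
TARGETS_FILE = "targets.txt"
WORDLIST_FILE = "words.txt"
EXTENSIONS = ["", ".jpg", ".png", ".gif"]  # every extra extension multiplies the requests


def exists(url):
    """Return True when the server answers with anything other than 404."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status != 404
    except urllib.error.HTTPError as err:
        return err.code != 404
    except urllib.error.URLError:
        return False


def main():
    with open(WORDLIST_FILE) as fh:
        words = [line.strip() for line in fh if line.strip()]
    with open(TARGETS_FILE) as fh:
        targets = [line.strip() for line in fh if line.strip()]

    # The whole wordlist is replayed for every target and every extension,
    # so the total request count is len(targets) * len(words) * len(EXTENSIONS).
    for target in targets:
        for word in words:
            for ext in EXTENSIONS:
                url = f"{target.rstrip('/')}/{word}{ext}"
                if exists(url):
                    print(url)


if __name__ == "__main__":
    main()
```

Because the total is targets × words × extensions, doubling either the target list or the extension list doubles the runtime, which is exactly why the wordlist has to stay small when scanning a list of URLs.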