Skipfish: Web Application Security Scanner

Skipfish is a web application security reconnaissance tool that comes pre-installed in popular security-focused operating systems like Kali Linux and Parrot Security OS. Skipfish prepares an interactive sitemap of the target site using a recursive crawl and dictionary-based probes, then annotates the resulting map with the output of a number of active security checks.

Hello everyone. In this lab, I will show you how to use Skipfish to identify potential security holes in websites.

So let’s get started.

For this purpose, you need three things in place:

  • First, you'll need Kali Linux installed and running on a virtual machine. This virtual environment isolates your scanning activities and safeguards your main computer. Make sure the virtual machine's network adapter is configured in a special mode called "Host-only" to allow communication between Kali and your target website.
  • Next, you'll need a website to scan. In this tutorial, we'll use Metasploitable2, a virtual machine deliberately designed to be vulnerable for educational purposes. It provides a safe space to practice without harming a real website.
  • Finally, you'll need a stable internet connection, which the scanner needs to reach the live target later in this lab. In your Kali VM's settings, open the Network adapter section and confirm that the second adapter is attached to the NAT network (see the quick check after this list).
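Before moving on, you can verify both adapters from inside the Kali VM. A minimal check (interface names such as eth0 and eth1 are typical but vary between setups):

┌──(kali㉿kali)-[~]
└─$ ip a

You should see one interface attached to the NAT network and a second one on the host-only subnet (commonly 192.168.56.0/24 with VirtualBox).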

Once these three prerequisites are in place, you're prepared to scan target websites for vulnerabilities. To begin, we'll scan a website running on Metasploitable2.

Before proceeding with the scanning process, it's crucial to identify the target's IP address. Open a terminal window in Kali Linux and execute the "netdiscover" command.

┌──(kali㉿kali)-[~]
└─$ sudo netdiscover -i eth1

This command acts like a digital radar, searching your network for devices. It will list the discovered devices along with their IP addresses.

We already know that Metasploitable2 has a web page running on port 80. Launch a browser, and you can access its web interface by typing the IP address into your browser's address bar.
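If you prefer the terminal, a quick header request also confirms the server is answering. Here 192.168.56.104 is the address my Metasploitable2 VM received; substitute the one netdiscover showed you:

┌──(kali㉿kali)-[~]
└─$ curl -I http://192.168.56.104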

Let’s first scan this webpage and list out the web vulnerabilities.

Skipfish can be found within the "Web Applications" menu in Kali Linux, usually under "Web Vulnerability Scanners."

When you launch Skipfish, it displays a command window with instructions for using its various functionalities.

$ skipfish -h
skipfish web application scanner - version 2.10b
Usage: skipfish [ options ... ] -W wordlist -o output_dir start_url [ start_url2 ... ]

Authentication and access options:

  -A user:pass      - use specified HTTP authentication credentials
  -F host=IP        - pretend that 'host' resolves to 'IP'
  -C name=val       - append a custom cookie to all requests
  -H name=val       - append a custom HTTP header to all requests
  -b (i|f|p)        - use headers consistent with MSIE / Firefox / iPhone
  -N                - do not accept any new cookies
  --auth-form url   - form authentication URL
  --auth-user user  - form authentication user
  --auth-pass pass  - form authentication password
  --auth-verify-url -  URL for in-session detection

Crawl scope options:

  -d max_depth     - maximum crawl tree depth (16)
  -c max_child     - maximum children to index per node (512)
  -x max_desc      - maximum descendants to index per branch (8192)
  -r r_limit       - max total number of requests to send (100000000)
  -p crawl%        - node and link crawl probability (100%)
  -q hex           - repeat probabilistic scan with given seed
  -I string        - only follow URLs matching 'string'
  -X string        - exclude URLs matching 'string'
  -K string        - do not fuzz parameters named 'string'
  -D domain        - crawl cross-site links to another domain
  -B domain        - trust, but do not crawl, another domain
  -Z               - do not descend into 5xx locations
  -O               - do not submit any forms
  -P               - do not parse HTML, etc, to find new links

Reporting options:

  -o dir          - write output to specified directory (required)
  -M              - log warnings about mixed content / non-SSL passwords
  -E              - log all HTTP/1.0 / HTTP/1.1 caching intent mismatches
  -U              - log all external URLs and e-mails seen
  -Q              - completely suppress duplicate nodes in reports
  -u              - be quiet, disable realtime progress stats
  -v              - enable runtime logging (to stderr)

Dictionary management options:

  -W wordlist     - use a specified read-write wordlist (required)
  -S wordlist     - load a supplemental read-only wordlist
  -L              - do not auto-learn new keywords for the site
  -Y              - do not fuzz extensions in directory brute-force
  -R age          - purge words hit more than 'age' scans ago
  -T name=val     - add new form auto-fill rule
  -G max_guess    - maximum number of keyword guesses to keep (256)

  -z sigfile      - load signatures from this file

Performance settings:

  -g max_conn     - max simultaneous TCP connections, global (40)
  -m host_conn    - max simultaneous connections, per target IP (10)
  -f max_fail     - max number of consecutive HTTP errors (100)
  -t req_tmout    - total request response timeout (20 s)
  -w rw_tmout     - individual network I/O timeout (10 s)
  -i idle_tmout   - timeout on idle HTTP connections (10 s)
  -s s_limit      - response size limit (400000 B)
  -e              - do not keep binary responses for reporting

Other settings:

  -l max_req      - max requests per second (0.000000)
  -k duration     - stop scanning after the given duration h:m:s
  --config file   - load the specified configuration file

Send comments and complaints to <[email protected]>.
┌──(kali㉿kali)-[~]
└─$ 

To run Skipfish against the target website, open a new terminal, type skipfish, choose an output directory with -o followed by the location, and then specify the target website.

┌──(kali㉿kali)-[~]
└─$ skipfish -o /home/kali/skipfish_report http://192.168.56.104

If you want to use your own wordlist, select it with the -W option followed by the path to the wordlist:

  -W wordlist     - use a specified read-write wordlist (required)
  -S wordlist     - load a supplemental read-only wordlist

Skipfish has its own built-in dictionaries for finding vulnerabilities. However, you can also use a custom wordlist if you have specific areas you want to investigate. Custom wordlists can be created to target particular weaknesses, and there are many resources online that can guide you through building your own.
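For example, to run the earlier scan with one of the dictionaries Skipfish ships with, copy it somewhere writable first, since -W expects a read-write list. This is a sketch: /usr/share/skipfish/dictionaries/ is where Kali typically installs the bundled wordlists, but verify the path on your system, and note that the output directory is a fresh one so the reports don't mix.

┌──(kali㉿kali)-[~]
└─$ cp /usr/share/skipfish/dictionaries/minimal.wl ~/minimal.wl

┌──(kali㉿kali)-[~]
└─$ skipfish -o /home/kali/skipfish_report2 -W ~/minimal.wl http://192.168.56.104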

If everything is set up correctly, Skipfish will display a message indicating the scan will begin in about 60 seconds, or as soon as you press any key.

You can press the Spacebar key to see detailed information about the scan as it progresses, or you can simply wait for it to start automatically.

Scans can take varying amounts of time to complete, ranging from a quick 30 seconds to a few hours depending on the size and complexity of the target website. If you need to stop the scan early for any reason, you can press Ctrl+C.
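If you would rather cap the runtime up front than watch for Ctrl+C, the -k option from the help output accepts a duration in h:m:s form. A sketch, using a separate output directory and an illustrative ten-minute limit:

┌──(kali㉿kali)-[~]
└─$ skipfish -o /home/kali/skipfish_report_timed -k 0:10:0 http://192.168.56.104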

Once the scan is finished or if you end it early, Skipfish will generate a collection of files in the folder you specified earlier.

Open the file named "index.html" in a web browser. This file displays the scan results in a user-friendly format. 
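You can open it straight from the terminal if you like (Firefox ships with Kali):

┌──(kali㉿kali)-[~]
└─$ firefox /home/kali/skipfish_report/index.html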

You can explore the report using the dropdown menus to navigate through the different vulnerabilities discovered by Skipfish.

Let’s scan another website, but this time a live one from the Internet: a deliberately vulnerable site called testphp.vulnweb.com.

Be aware that this website is intentionally vulnerable and is specifically designed to be scanned for educational purposes.

The process is almost identical to what we did before. Just replace the Metasploitable2 address in your Skipfish command with this site's URL.

┌──(kali㉿kali)-[~]
└─$ skipfish -o /home/kali/skipfish_report http://testphp.vulnweb.com/

This time, we'll observe how long the scan takes. Scan times vary with a website's size and complexity, so it's difficult to predict an exact duration. Be patient and let Skipfish do its job.
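Because this is a shared, live target, you can also be gentler on it by throttling the scan with the performance options from the help output above. The values and report directory below are only illustrative:

┌──(kali㉿kali)-[~]
└─$ skipfish -o /home/kali/vulnweb_report -m 5 -l 50 http://testphp.vulnweb.com/

Here -m caps simultaneous connections per target IP and -l caps requests per second.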

As before, navigate to the output folder you specified in the command and open the "index.html" file in your web browser. 

This will display the scan results for http://testphp.vulnweb.com/, highlighting any potential vulnerabilities Skipfish discovered. For reference, the scan took 41 minutes and 57 seconds to complete on my PC.

This is a simplified explanation. There's a lot more to web security than this. If you have any questions or get stuck, feel free to leave a comment!
