KingSkrupellos

Google SkipFish for Linux Web Security Tool Codes

Dec 4th, 2017
[video=youtube]https://www.youtube.com/watch?v=THw5DYyM1Fg[/video]

[video=youtube]https://www.youtube.com/watch?v=4P7fBa812w4[/video]

[video=youtube]https://www.youtube.com/watch?v=82HMtGc61d8[/video]

https://www.cyberizm.org/cyberizm-google-skipfish-for-linux-web-security-tool-codes.html

[img]http://i.hizliresim.com/4PaXXA.png[/img]

[img]http://i.hizliresim.com/O0omm4.png[/img]

[img]http://i.hizliresim.com/JaQmmJ.png[/img]

Skipfish is an open-source security tool that scans web applications and aims to spot potential security problems right from the start.

Running on the Linux command line, the tool offers capabilities similar to Nmap and Nessus. Skipfish examines the code piece by piece and experimentally checks whether it contains cross-site scripting (XSS) and SQL or XML injection vulnerabilities.

Skipfish is a scanner that stands out above all for its fast scanning and reporting system.

English Description =>

Skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for a site by carrying out a recursive crawl and dictionary-based probes. Written in C with a custom HTTP stack, it is high-performance, easy to use and reliable.
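
A minimal sketch of that crawl-plus-dictionary workflow (the wordlist path is an assumption based on the usual Kali package layout, and the target URL is a placeholder):

[code]# copy a stock dictionary so skipfish can extend it during the scan (path assumed from Kali's skipfish package)
cp /usr/share/skipfish/dictionaries/minimal.wl site.wl
# recursive crawl plus dictionary-based probes, report written to ./basic-scan
skipfish -W site.wl -o basic-scan http://192.168.1.202/[/code]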

Direct Google Download Link =>

[hide][code]https://code.google.com/archive/p/skipfish/[/code][/hide]

GitHub SkipFish Main Files

[hide][code]https://github.com/spinkham/skipfish[/code][/hide]

Skipfish – Fully automated, active web application security reconnaissance tool

Usage Guide and SkipFish Commands

[code]root@kali:~# skipfish -h
CyBeRiZM skipfish web application scanner - version 2.10b
Usage: skipfish [ options ... ] -W wordlist -o output_dir start_url [ start_url2 ... ]

Authentication and access options:

-A user:pass - use specified HTTP authentication credentials
-F host=IP - pretend that 'host' resolves to 'IP'
-C name=val - append a custom cookie to all requests
-H name=val - append a custom HTTP header to all requests
-b (i|f|p) - use headers consistent with MSIE / Firefox / iPhone
-N - do not accept any new cookies
--auth-form url - form authentication URL
--auth-user user - form authentication user
--auth-pass pass - form authentication password
--auth-verify-url - URL for in-session detection

Crawl scope options:

-d max_depth - maximum crawl tree depth (16)
-c max_child - maximum children to index per node (512)
-x max_desc - maximum descendants to index per branch (8192)
-r r_limit - max total number of requests to send (100000000)
-p crawl% - node and link crawl probability (100%)
-q hex - repeat probabilistic scan with given seed
-I string - only follow URLs matching 'string'
-X string - exclude URLs matching 'string'
-K string - do not fuzz parameters named 'string'
-D domain - crawl cross-site links to another domain
-B domain - trust, but do not crawl, another domain
-Z - do not descend into 5xx locations
-O - do not submit any forms
-P - do not parse HTML, etc, to find new links

Reporting options:

-o dir - write output to specified directory (required)
-M - log warnings about mixed content / non-SSL passwords
-E - log all HTTP/1.0 / HTTP/1.1 caching intent mismatches
-U - log all external URLs and e-mails seen
-Q - completely suppress duplicate nodes in reports
-u - be quiet, disable realtime progress stats
-v - enable runtime logging (to stderr)

Dictionary management options:

-W wordlist - use a specified read-write wordlist (required)
-S wordlist - load a supplemental read-only wordlist
-L - do not auto-learn new keywords for the site
-Y - do not fuzz extensions in directory brute-force
-R age - purge words hit more than 'age' scans ago
-T name=val - add new form auto-fill rule
-G max_guess - maximum number of keyword guesses to keep (256)

-z sigfile - load signatures from this file

Performance settings:

-g max_conn - max simultaneous TCP connections, global (40)
-m host_conn - max simultaneous connections, per target IP (10)
-f max_fail - max number of consecutive HTTP errors (100)
-t req_tmout - total request response timeout (20 s)
-w rw_tmout - individual network I/O timeout (10 s)
-i idle_tmout - timeout on idle HTTP connections (10 s)
-s s_limit - response size limit (400000 B)
-e - do not keep binary responses for reporting

Other settings:

-l max_req - max requests per second (0.000000)
-k duration - stop scanning after the given duration h:m:s
--config file - load the specified configuration file

Send comments and complaints to <machscher1turk@gmail.com>.[/code]
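
A few of these options combined, sketched with placeholder credentials and URLs (none of the values below come from the original post):

[code]# HTTP basic auth, keep the crawl inside /wordpress, and skip anything matching 'logout'
skipfish -A admin:password -I /wordpress -X logout \
         -W site.wl -o wp-auth-scan http://192.168.1.202/wordpress

# form-based login instead of basic auth; --auth-verify-url lets skipfish detect when the session drops
skipfish --auth-form http://192.168.1.202/wordpress/wp-login.php \
         --auth-user admin --auth-pass password \
         --auth-verify-url http://192.168.1.202/wordpress/wp-admin/ \
         -W site.wl -o wp-form-scan http://192.168.1.202/wordpress[/code]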

SkipFish Usage Example

Using the given directory for output (-o 202), scan the web application URL (http://192.168.1.202/wordpress):

[code]root@kali:~# skipfish -o 202 http://192.168.1.202/wordpress

CyBeRiZM skipfish version 2.10b by machscher1turk@gmail.com

- 192.168.1.202 -

Scan statistics:

Scan time : 0:00:05.849
HTTP requests : 2841 (485.6/s), 1601 kB in, 563 kB out (370.2 kB/s)
Compression : 802 kB in, 1255 kB out (22.0% gain)
HTTP faults : 0 net errors, 0 proto errors, 0 retried, 0 drops
TCP handshakes : 46 total (61.8 req/conn)
TCP faults : 0 failures, 0 timeouts, 16 purged
External links : 512 skipped
Reqs pending : 0

Database statistics:

Pivots : 13 total, 12 done (92.31%)
In progress : 0 pending, 0 init, 0 attacks, 1 dict
Missing nodes : 0 spotted
Node types : 1 serv, 4 dir, 6 file, 0 pinfo, 0 unkn, 2 par, 0 val
Issues found : 10 info, 0 warn, 0 low, 8 medium, 0 high impact
Dict size : 20 words (20 new), 1 extensions, 202 candidates
Signatures : 77 total

[+] Copying static resources...
[+] Sorting and annotating crawl nodes: 13
[+] Looking for duplicate entries: 13
[+] Counting unique nodes: 11
[+] Saving pivot data for third-party tools...
[+] Writing scan description...
[+] Writing crawl tree: 13
[+] Generating summary views...
[+] Report saved to '202/index.html' [0x7054c49d].
[+] This was a great day for science![/code]
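
The finished report is a static HTML page, so opening it in any browser is enough (the xdg-open call below assumes a desktop session). If the target is fragile, the performance and duration flags from the help text above can slow the scan down; the numbers here are arbitrary examples, not recommended values:

[code]xdg-open 202/index.html

# gentler re-scan: 10 global connections, 2 per host, 50 requests/second, stop after 30 minutes
skipfish -g 10 -m 2 -l 50 -k 0:30:00 -W site.wl -o 202-slow http://192.168.1.202/wordpress[/code]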