# Crawljax Command-line
This is the command-line distribution of Crawljax. The project is packaged as a ZIP file containing a runnable jar that executes the crawler.

Unzip the archive; from the resulting folder you can run Crawljax as follows:

```
usage: java -jar crawljax-cli-version.jar theUrl theOutputDir
 -a,--crawlHiddenAnchors     Crawl anchors even if they are not visible in the
                             browser.
 -b,--browser <arg>          browser type: firefox, ie, chrome, remote,
                             htmlunit, android, iphone. Default is Firefox
 -click <arg>                a comma separated list of HTML tags that should
                             be clicked. Default is A and BUTTON
 -d,--depth <arg>            crawl depth level. Default is 2
 -h,--help                   print this message
 -log <arg>                  Log to this file instead of the console
 -o,--override               Override the output directory if non-empty
 -p,--parallel <arg>         Number of browsers to use for crawling. Default
                             is 1
 -s,--maxstates <arg>        max number of states to crawl. Default is 0
                             (unlimited)
 -t,--timeout <arg>          Specify the maximum crawl time in minutes
 -v,--verbose                Be extra verbose
 -version                    print the version information and exit
 -waitAfterEvent <arg>       the time to wait after an event has been fired in
                             milliseconds. Default is 500
 -waitAfterReload <arg>      the time to wait after an URL has been loaded in
                             milliseconds. Default is 500
```
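
For example, a crawl of `http://example.com` with Chrome to depth 3 might look like the sketch below. The site URL, output directory, and archive name are illustrative, and `version` stands in for the actual release number in the file names:

```
# Archive and folder names are illustrative; substitute the real release version.
unzip crawljax-cli-version.zip
cd crawljax-cli-version
java -jar crawljax-cli-version.jar http://example.com out -b chrome -d 3
```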

The output folder will contain the output of the Crawl overview plugin.