WWW::Spyder is a web robot/spider. It can be set to crawl perpetually or be given a range of exit conditions (kilobytes retrieved, pages retrieved, time run, etc.). While it crawls, it gives access to the plain text, the HTML, and all the links on each page, and it can order the links by priority if given a set of terms to search for. It's pure Perl and uses the following: strict, warnings, overload, Carp, HTML::Parser (version 3), LWP::UserAgent, HTTP::Cookies, URI::URL, HTML::Entities, POSIX (for nice), and Digest::MD5 (for md5_base64).

It's still somewhere between alpha and beta: pretty useful, but not 100% dependable quite yet. A rough usage sketch follows below.

Thanks for looking!

-asdlfkjaslfsdjkley
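Here's a minimal sketch of the kind of crawl loop described above. The constructor options and accessor names (seed, exit_on, terms, crawl, title, text, links) are illustrative guesses for this note, not the module's documented interface; check the POD for the real names.

  use strict;
  use warnings;
  use WWW::Spyder;

  # NOTE: all option and method names below are assumptions for illustration.
  my $spyder = WWW::Spyder->new(
      seed    => 'http://example.com/',       # starting URL
      exit_on => { pages => 100 },            # stop after 100 pages (assumed exit condition)
      terms   => [ qw(perl spider robot) ],   # terms used to prioritize links (assumed)
  );

  # Crawl until an exit condition is hit, handling each page as it arrives.
  while ( my $page = $spyder->crawl ) {
      print $page->title, "\n";       # page title
      my $plain_text = $page->text;   # plain text of the page
      my $html       = $page->html;   # raw HTML of the page
      my @links      = $page->links;  # links found on the page
      # ... do something with the content here ...
  }

The idea is simply that each call to crawl() hands back one fetched page so you can inspect its text, HTML, and links before the spider moves on.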