Download List

Project Description

SWEC is a program that automates testing of dynamic Web sites. It parses each HTML file it finds for links, and if a link points within the specified site, it checks that page as well. While parsing and locating links, it also scans each page for known errors and reports them, and it reports any page that cannot be read (for example, one that returns a 404, 500, or similar error).
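The crawl-and-check loop described above can be sketched as follows. This is an illustrative Python sketch, not SWEC's actual Perl implementation; the `fetch` callback and the sample error patterns are assumptions for demonstration.

```python
import re
from urllib.parse import urljoin, urlparse

def check_site(seed, fetch, error_patterns=(r"Fatal error", r"Traceback")):
    """Crawl every page reachable from `seed` that stays on the seed's
    host, and collect HTTP failures plus known error strings.

    `fetch(url)` must return an (http_status, html_text) tuple.
    """
    host = urlparse(seed).netloc
    queue, seen, problems = [seed], {seed}, []
    while queue:
        url = queue.pop(0)
        status, html = fetch(url)
        if status >= 400:                       # 404, 500, and similar
            problems.append((url, f"HTTP {status}"))
            continue
        for pat in error_patterns:              # known errors in page body
            if re.search(pat, html):
                problems.append((url, f"matched {pat!r}"))
        for href in re.findall(r'href="([^"]+)"', html):
            link = urljoin(url, href)           # resolve relative links
            if urlparse(link).netloc == host and link not in seen:
                seen.add(link)                  # only follow in-site links
                queue.append(link)
    return problems
```

A fetch function backed by a real HTTP client can be dropped in; the sketch only fixes the crawl order and the in-site test.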

System Requirements

No system requirements have been defined.
Information regarding Project Releases and Project Resources. Note that the information here is quoted from the project's Freecode.com page, and the downloads themselves may not be hosted on OSDN.

2009-10-01 00:50
0.4

A new version of the test definition format (SDFv2) is now used, which improves both speed and flexibility of error checks. The old format is deprecated, but will continue to be supported until SWEC 0.6. Seed/baseurl parsing was made smarter. The --checksub parameter was added, which makes SWEC descend into subdomains. Various bugfixes, code cleanups, and minor changes were made.
Tags: Major feature enhancements, Code cleanup, Minor bugfixes
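The --checksub behaviour can be thought of as relaxing the in-site test from an exact host match to a suffix match on the domain. A hypothetical sketch of that scope check (not SWEC's actual code) in Python:

```python
from urllib.parse import urlparse

def in_scope(url, base_host, checksub=False):
    """Return True if `url` is on `base_host`, or, when `checksub`
    is set, on any subdomain of it (e.g. wiki.example.com under
    example.com)."""
    host = urlparse(url).netloc
    if host == base_host:
        return True
    return checksub and host.endswith("." + base_host)
```

The trailing dot in the suffix test prevents an unrelated host such as badexample.com from being treated as a subdomain of example.com.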

2009-07-24 18:58
0.3

SWEC now returns nonzero if a test fails. The dependency on HTML::LinkExtractor was removed (it is now optional). The user-agent string and the final summary were cleaned up. The --nohead option was added, which tells SWEC to skip performing HEAD requests and go straight to GET. The --keepgoing option was added, which tells SWEC to parse a document for URLs even if it contains errors. Various other bugfixes and minor enhancements were made.
Tags: Major feature enhancements, Bug fixes

2009-03-17 05:48
0.2.1

Minor fixes and enhancements to various tests. Now retries if the server resets the connection before the test is done. Some of the output is easier to read. This release adds --lwphead and --lwpget, which are equivalent to the LWP HEAD and GET commands but with added support for cookies.
Tags: Stable, Minor feature enhancements
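The retry-on-reset behaviour added in this release can be sketched as a small wrapper; this is an illustrative Python sketch with assumed names, not SWEC's actual code:

```python
def with_retries(request, attempts=3,
                 is_reset=lambda e: isinstance(e, ConnectionResetError)):
    """Call `request()` and retry when the server resets the connection
    before the test finishes; other errors are re-raised immediately."""
    for attempt in range(attempts):
        try:
            return request()
        except Exception as exc:
            # Give up on non-reset errors, or once attempts are exhausted.
            if not is_reset(exc) or attempt == attempts - 1:
                raise
```

Retrying only on connection resets keeps genuine test failures (such as HTTP error statuses) from being masked by repeated attempts.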

2009-01-28 19:00
0.2

Many new tests were added. Internal error codes are generated for HTTP errors so that you may exclude those tests. Better assumptions are made when you only supply a single URL on the command line. Binary files are now skipped based on their returned HTTP content type, not just their extension. A HEAD request is now performed before the GET request, so that skipped files are not downloaded needlessly. URL seeds are now checked in the order they appear on the command line.
Tags: Initial freshmeat announcement
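The HEAD-before-GET optimisation described above amounts to a cheap content-type probe before the full download. A minimal sketch, assuming hypothetical `head` and `get` callables rather than SWEC's actual request code:

```python
def fetch_if_html(url, head, get):
    """Issue a cheap HEAD request first and only perform the full GET
    when the reported Content-Type is HTML, so binary files are skipped
    without being downloaded. `head(url)` returns the Content-Type
    string; `get(url)` returns the page body."""
    ctype = head(url)
    if not ctype.startswith("text/html"):
        return None          # binary or non-HTML: skip the download
    return get(url)
```

Deciding from the server-reported content type, rather than the file extension, also catches binaries served from extensionless URLs.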

Project Resources