#! /usr/bin/env python

"""Web tree checker.

This utility is handy to check a subweb of the world-wide web for
errors.  A subweb is specified by giving one or more ``root URLs''; a
page belongs to the subweb if one of the root URLs is an initial
prefix of it.
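
For example, given the root URL http://www.example.com/docs/ (a
made-up URL), the page http://www.example.com/docs/intro.html belongs
to the subweb but http://www.example.com/news.html does not; the test
is a plain string prefix comparison (url[:len(root)] == root).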

File URL extension:

To ease the checking of subwebs via the local file system,
the interpretation of ``file:'' URLs is extended to mimic the behavior
of your average HTTP daemon: if a directory pathname is given, the
file index.html in that directory is returned if it exists, otherwise
a directory listing is returned.  Now, you can point webchecker to the
document tree in the local file system of your HTTP daemon, and have
most of it checked.  In fact the default works this way if your local
web tree is located at /var/www, which is the default for Debian
GNU/Linux. Other systems use /usr/local/etc/httpd/htdocs (the default for
the NCSA HTTP daemon and probably others).
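
For example, to check a locally stored web tree directly (assuming it
lives in the default location):

    python webchecker.py file:/var/www/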

Report printed:

When done, it reports pages with bad links within the subweb.  When
interrupted, it reports on the pages that it has checked so far.

In verbose mode, additional messages are printed during the
information gathering phase.  By default, it prints a summary of its
work status every 50 URLs (adjustable with the -r option), and it
reports errors as they are encountered.  Use the -q option to disable
this output.

Checkpoint feature:

Whether interrupted or not, it dumps its state (a Python pickle) to a
checkpoint file and the -R option allows it to restart from the
checkpoint (assuming that the pages on the subweb that were already
processed haven't changed).  Even when it has run till completion, -R
can still be useful -- it will print the reports again, and -Rq prints
the errors only.  In this case, the checkpoint file is not written
again.  The checkpoint file can be set with the -d option.

The checkpoint file is written as a Python pickle.  Remember that
Python's pickle module is currently quite slow.  Give it the time it
needs to load and save the checkpoint file.  When interrupted while
writing the checkpoint file, the old checkpoint file is not
overwritten, but all work done in the current run is lost.
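
A typical checkpointed session (using a made-up root URL):

    python webchecker.py http://www.example.com/   (writes @webchecker.pickle)
    python webchecker.py -R                        (resumes from the checkpoint)
    python webchecker.py -Rq                       (prints only the errors again)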

Miscellaneous:

- You may find the (Tk-based) GUI version easier to use.  See wcgui.py.

- Webchecker honors the "robots.txt" convention.  Thanks to Skip
Montanaro for his robotparser.py module (included in this directory)!
The agent name is hardwired to "webchecker".  URLs that are disallowed
by the robots.txt file are reported as external URLs.

- Because the SGML parser is a bit slow, very large SGML files are
skipped.  The size limit can be set with the -m option.

- When the server or protocol does not tell us a file's type, we guess
it based on the URL's suffix.  The mimetypes.py module (also in this
directory) has a built-in table mapping most currently known suffixes,
and in addition attempts to read the mime.types configuration files in
the default locations of Netscape and the NCSA HTTP daemon.

- We follow links indicated by <A>, <FRAME> and <IMG> tags.  We also
honor the <BASE> tag.

- Checking external links is now done by default; use -x to *disable*
this feature.  External links are now checked during normal
processing.  (XXX The status of a checked link could be categorized
better.  Later...)

Usage: webchecker.py [option] ... [rooturl] ...

Options:

-R        -- restart from checkpoint file
-d file   -- checkpoint filename (default %(DUMPFILE)s)
-m bytes  -- skip HTML pages larger than this size (default %(MAXPAGE)d)
-n        -- reports only, no checking (use with -R)
-q        -- quiet operation (also suppresses external links report)
-r number -- number of links processed per round (default %(ROUNDSIZE)d)
-v        -- verbose operation; repeating -v will increase verbosity
-x        -- don't check external links (these are often slow to check)

Arguments:

rooturl   -- URL to start checking
             (default %(DEFROOT)s)
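
Example (using a made-up root URL; -x skips external links, -v adds
progress messages):

    webchecker.py -x -v http://www.example.com/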

"""

__version__ = "$Revision: 1.18 $"

import sys
import os
from types import *
import string
import StringIO
import getopt
import pickle

import urllib
import urlparse
import sgmllib

import mimetypes
import robotparser

# Extract real version number if necessary
if __version__[0] == '$':
    _v = string.split(__version__)
    if len(_v) == 3:
        __version__ = _v[1]

# Tunable parameters
DEFROOT = "file:/var/www/"   # Default root URL
CHECKEXT = 1                            # Check external references (1 deep)
VERBOSE = 1                             # Verbosity level (0-3)
MAXPAGE = 150000                        # Ignore files bigger than this
ROUNDSIZE = 50                          # Number of links processed per round
DUMPFILE = "@webchecker.pickle"         # Pickled checkpoint
AGENTNAME = "webchecker"                # Agent name for robots.txt parser

# Global variables

def main():
    checkext = CHECKEXT
    verbose = VERBOSE
    maxpage = MAXPAGE
    roundsize = ROUNDSIZE
    dumpfile = DUMPFILE
    restart = 0
    norun = 0

    try:
        opts, args = getopt.getopt(sys.argv[1:], 'Rd:m:nqr:vx')
    except getopt.error, msg:
        sys.stdout = sys.stderr
        print msg
        print __doc__%globals()
        sys.exit(2)
    for o, a in opts:
        if o == '-R':
            restart = 1
        if o == '-d':
            dumpfile = a
        if o == '-m':
            maxpage = string.atoi(a)
        if o == '-n':
            norun = 1
        if o == '-q':
            verbose = 0
        if o == '-r':
            roundsize = string.atoi(a)
        if o == '-v':
            verbose = verbose + 1
        if o == '-x':
            checkext = not checkext

    if verbose > 0:
        print AGENTNAME, "version", __version__

    if restart:
        c = load_pickle(dumpfile=dumpfile, verbose=verbose)
    else:
        c = Checker()

    c.setflags(checkext=checkext, verbose=verbose,
               maxpage=maxpage, roundsize=roundsize)

    if not restart and not args:
        args.append(DEFROOT)

    for arg in args:
        c.addroot(arg)

    if not norun:
        try:
            c.run()
        except KeyboardInterrupt:
            if verbose > 0:
                print "[run interrupted]"

    try:
        c.report()
    except KeyboardInterrupt:
        if verbose > 0:
            print "[report interrupted]"

    if c.save_pickle(dumpfile):
        if dumpfile == DUMPFILE:
            print "Use ``%s -R'' to restart." % sys.argv[0]
        else:
            print "Use ``%s -R -d %s'' to restart." % (sys.argv[0],
                                                       dumpfile)

def load_pickle(dumpfile=DUMPFILE, verbose=VERBOSE):
    if verbose > 0:
        print "Loading checkpoint from %s ..." % dumpfile
    f = open(dumpfile, "rb")
    c = pickle.load(f)
    f.close()
    if verbose > 0:
        print "Done."
        print "Root:", string.join(c.roots, "\n      ")
    return c

class Checker:
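    """Crawl a subweb from the root URLs and record pages with bad links.

    The crawl state (roots, todo, done, bad, round) is what gets
    pickled for checkpoints; robots, errors and the URL opener are
    per-run objects, rebuilt by reset() when a checkpoint is loaded.
    """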

    checkext = CHECKEXT
    verbose = VERBOSE
    maxpage = MAXPAGE
    roundsize = ROUNDSIZE

    validflags = tuple(dir())

    def __init__(self):
        self.reset()

    def setflags(self, **kw):
        for key in kw.keys():
            if key not in self.validflags:
                raise NameError, "invalid keyword argument: %s" % str(key)
        for key, value in kw.items():
            setattr(self, key, value)

    def reset(self):
        self.roots = []
        self.todo = {}
        self.done = {}
        self.bad = {}
        self.round = 0
        # The following are not pickled:
        self.robots = {}
        self.errors = {}
        self.urlopener = MyURLopener()
        self.changed = 0

    def note(self, level, format, *args):
        if self.verbose > level:
            if args:
                format = format%args
            self.message(format)

    def message(self, format, *args):
        if args:
            format = format%args
        print format

    def __getstate__(self):
        return (self.roots, self.todo, self.done, self.bad, self.round)

    def __setstate__(self, state):
        self.reset()
        (self.roots, self.todo, self.done, self.bad, self.round) = state
        for root in self.roots:
            self.addrobot(root)
        for url in self.bad.keys():
            self.markerror(url)

    def addroot(self, root):
        if root not in self.roots:
            troot = root
            scheme, netloc, path, params, query, fragment = \
                    urlparse.urlparse(root)
            i = string.rfind(path, "/") + 1
            if 0 < i < len(path):
                path = path[:i]
                troot = urlparse.urlunparse((scheme, netloc, path,
                                             params, query, fragment))
            self.roots.append(troot)
            self.addrobot(root)
            self.newlink(root, ("<root>", root))

    def addrobot(self, root):
        root = urlparse.urljoin(root, "/")
        if self.robots.has_key(root): return
        url = urlparse.urljoin(root, "/robots.txt")
        self.robots[root] = rp = robotparser.RobotFileParser()
        self.note(2, "Parsing %s", url)
        rp.debug = self.verbose > 3
        rp.set_url(url)
        try:
            rp.read()
        except IOError, msg:
            self.note(1, "I/O error parsing %s: %s", url, msg)

    def run(self):
        while self.todo:
            self.round = self.round + 1
            self.note(0, "\nRound %d (%s)\n", self.round, self.status())
            urls = self.todo.keys()
            urls.sort()
            del urls[self.roundsize:]
            for url in urls:
                self.dopage(url)

    def status(self):
        return "%d total, %d to do, %d done, %d bad" % (
            len(self.todo), len(self.done),

    def report(self):
        if not self.todo: s = "Final"
        else: s = "Interim"
        self.message("%s Report (%s)", s, self.status())
        self.report_errors()

    def report_errors(self):
        if not self.bad:
            self.message("\nNo errors")
            return
        self.message("\nError Report:")
        sources = self.errors.keys()
        sources.sort()
        for source in sources:
            triples = self.errors[source]
            if len(triples) > 1:
                self.message("%d Errors in %s", len(triples), source)
            else:
                self.message("Error in %s", source)
            for url, rawlink, msg in triples:
                if rawlink != url: s = " (%s)" % rawlink
                else: s = ""
                self.message("  HREF %s%s\n    msg %s", url, s, msg)

    def dopage(self, url):
        if self.verbose > 1:
            if self.verbose > 2:
      "Check ", url, "  from", self.todo[url])
                self.message("Check %s", url)
        page = self.getpage(url)
        if page:
            for info in page.getlinkinfos():
                link, rawlink = info
                origin = url, rawlink
                self.newlink(link, origin)
        self.markdone(url)

    def newlink(self, url, origin):
        if self.done.has_key(url):
            self.newdonelink(url, origin)
        else:
            self.newtodolink(url, origin)

    def newdonelink(self, url, origin):
        self.done[url].append(origin)
        self.note(3, "  Done link %s", url)

    def newtodolink(self, url, origin):
        if self.todo.has_key(url):
            self.todo[url].append(origin)
            self.note(3, "  Seen todo link %s", url)
        else:
            self.todo[url] = [origin]
            self.note(3, "  New todo link %s", url)

    def markdone(self, url):
        self.done[url] = self.todo[url]
        del self.todo[url]
        self.changed = 1

    def inroots(self, url):
        for root in self.roots:
            if url[:len(root)] == root:
                return self.isallowed(root, url)
        return 0

    def isallowed(self, root, url):
        root = urlparse.urljoin(root, "/")
        return self.robots[root].can_fetch(AGENTNAME, url)

    def getpage(self, url):
        if url[:7] == 'mailto:' or url[:5] == 'news:':
            self.note(1, " Not checking mailto/news URL")
            return None
        isint = self.inroots(url)
        if not isint:
            if not self.checkext:
                self.note(1, " Not checking ext link")
                return None
            f = self.openpage(url)
            if f:
                self.safeclose(f)
            return None
        text, nurl = self.readhtml(url)
        if nurl != url:
            self.note(1, " Redirected to %s", nurl)
            url = nurl
        if text:
            return Page(text, url, maxpage=self.maxpage, checker=self)

    def readhtml(self, url):
        text = None
        f, url = self.openhtml(url)
        if f:
            text = f.read()
            f.close()
        return text, url

    def openhtml(self, url):
        f = self.openpage(url)
        if f:
            url = f.geturl()
            info = f.info()
            if not self.checkforhtml(info, url):
                self.safeclose(f)
                f = None
        return f, url

    def openpage(self, url):
        try:
            return self.urlopener.open(url)
        except IOError, msg:
            msg = self.sanitize(msg)
            self.note(0, "Error %s", msg)
            if self.verbose > 0:
                self.show(" HREF ", url, "  from", self.todo[url])
            self.setbad(url, msg)
            return None

    def checkforhtml(self, info, url):
        if info.has_key('content-type'):
            ctype = string.lower(info['content-type'])
        else:
            if url[-1:] == "/":
                return 1
            ctype, encoding = mimetypes.guess_type(url)
        if ctype == 'text/html':
            return 1
        else:
            self.note(1, " Not HTML, mime type %s", ctype)
            return 0

    def setgood(self, url):
        if self.bad.has_key(url):
            del self.bad[url]
            self.changed = 1
            self.note(0, "(Clear previously seen error)")

    def setbad(self, url, msg):
        if self.bad.has_key(url) and self.bad[url] == msg:
            self.note(0, "(Seen this error before)")
            return
        self.bad[url] = msg
        self.changed = 1
        self.markerror(url)

    def markerror(self, url):
        try:
            origins = self.todo[url]
        except KeyError:
            origins = self.done[url]
        for source, rawlink in origins:
            triple = url, rawlink, self.bad[url]
            self.seterror(source, triple)

    def seterror(self, url, triple):
        try:
            self.errors[url].append(triple)
        except KeyError:
            self.errors[url] = [triple]

    # The following used to be toplevel functions; they have been
    # changed into methods so they can be overridden in subclasses.

    def show(self, p1, link, p2, origins):
        self.message("%s %s", p1, link)
        i = 0
        for source, rawlink in origins:
            i = i+1
            if i == 2:
                p2 = ' '*len(p2)
            if rawlink != link: s = " (%s)" % rawlink
            else: s = ""
            self.message("%s %s%s", p2, source, s)

    def sanitize(self, msg):
        if isinstance(IOError, ClassType) and isinstance(msg, IOError):
            # Do the other branch recursively
            msg.args = self.sanitize(msg.args)
        elif isinstance(msg, TupleType):
            if len(msg) >= 4 and msg[0] == 'http error' and \
               isinstance(msg[3], InstanceType):
                # Remove the Message instance -- it may contain
                # a file object which prevents pickling.
                msg = msg[:3] + msg[4:]
        return msg

    def safeclose(self, f):
        try:
            url = f.geturl()
        except AttributeError:
            pass
        else:
            if url[:4] == 'ftp:' or url[:7] == 'file://':
                # Apparently ftp connections don't like to be closed
                # prematurely...
                text = f.read()
        f.close()

    def save_pickle(self, dumpfile=DUMPFILE):
        if not self.changed:
            self.note(0, "\nNo need to save checkpoint")
        elif not dumpfile:
            self.note(0, "No dumpfile, won't save checkpoint")
            self.note(0, "\nSaving checkpoint to %s ...", dumpfile)
            newfile = dumpfile + ".new"
            f = open(newfile, "wb")
            pickle.dump(self, f)
            except os.error:
            os.rename(newfile, dumpfile)
            self.note(0, "Done.")
            return 1

class Page:
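    """A fetched HTML page: wraps the page text and extracts its links."""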

    def __init__(self, text, url, verbose=VERBOSE, maxpage=MAXPAGE, checker=None):
        self.text = text
        self.url = url
        self.verbose = verbose
        self.maxpage = maxpage
        self.checker = checker

    def note(self, level, msg, *args):
        if self.checker:
            apply(self.checker.note, (level, msg) + args)
        else:
            if self.verbose >= level:
                if args:
                    msg = msg%args
                print msg

    def getlinkinfos(self):
        size = len(self.text)
        if size > self.maxpage:
            self.note(0, "Skip huge file %s (%.0f Kbytes)", self.url, (size*0.001))
            return []
        self.note(2, "  Parsing %s (%d bytes)", self.url, size)
        parser = MyHTMLParser(verbose=self.verbose, checker=self.checker)
        parser.feed(self.text)
        parser.close()
        rawlinks = parser.getlinks()
        base = urlparse.urljoin(self.url, parser.getbase() or "")
        infos = []
        for rawlink in rawlinks:
            t = urlparse.urlparse(rawlink)
            t = t[:-1] + ('',)
            rawlink = urlparse.urlunparse(t)
            link = urlparse.urljoin(base, rawlink)
            infos.append((link, rawlink))
        return infos

class MyStringIO(StringIO.StringIO):

    def __init__(self, url, info):
        self.__url = url
        self.__info = info
        StringIO.StringIO.__init__(self)

    def info(self):
        return self.__info

    def geturl(self):
        return self.__url

class MyURLopener(urllib.FancyURLopener):
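    """FancyURLopener that identifies itself and extends file: URLs.

    Directories are served as index.html if present, else as a
    generated listing, mimicking an HTTP daemon.
    """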

    http_error_default = urllib.URLopener.http_error_default

    def __init__(*args):
        self = args[0]
        apply(urllib.FancyURLopener.__init__, args)
        self.addheaders = [
            ('User-agent', 'Python-webchecker/%s' % __version__),
            ]

    def http_error_401(self, url, fp, errcode, errmsg, headers):
        return None

    def open_file(self, url):
        path = urllib.url2pathname(urllib.unquote(url))
        if path[-1] != os.sep:
            url = url + '/'
        if os.path.isdir(path):
            indexpath = os.path.join(path, "index.html")
            if os.path.exists(indexpath):
                return self.open_file(url + "index.html")
            try:
                names = os.listdir(path)
            except os.error, msg:
                raise IOError, msg, sys.exc_traceback
            names.sort()
            s = MyStringIO("file:"+url, {'content-type': 'text/html'})
            s.write('<BASE HREF="file:%s">\n' %
                    urllib.quote(os.path.join(path, "")))
            for name in names:
                q = urllib.quote(name)
                s.write('<A HREF="%s">%s</A>\n' % (q, q))
            s.seek(0)
            return s
        return urllib.FancyURLopener.open_file(self, path)

class MyHTMLParser(sgmllib.SGMLParser):
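    """SGML parser that collects link targets from A, AREA, IMG and
    FRAME tags, and honors the BASE tag."""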

    def __init__(self, verbose=VERBOSE, checker=None):
        self.myverbose = verbose # now unused
        self.checker = checker
        self.base = None
        self.links = {}
        sgmllib.SGMLParser.__init__(self)

    def start_a(self, attributes):
        self.link_attr(attributes, 'href')

    def end_a(self): pass

    def do_area(self, attributes):
        self.link_attr(attributes, 'href')

    def do_img(self, attributes):
        self.link_attr(attributes, 'src', 'lowsrc')

    def do_frame(self, attributes):
        self.link_attr(attributes, 'src')

    def link_attr(self, attributes, *args):
        for name, value in attributes:
            if name in args:
                if value: value = string.strip(value)
                if value: self.links[value] = None

    def do_base(self, attributes):
        for name, value in attributes:
            if name == 'href':
                if value: value = string.strip(value)
                if value:
                    if self.checker:
                        self.checker.note(1, "  Base %s", value)
                    self.base = value

    def getlinks(self):
        return self.links.keys()

    def getbase(self):
        return self.base

if __name__ == '__main__':
    main()