

sitemap problem

Discussion in 'General Discussion' started by paulmulder, Dec 19, 2007.

  1. paulmulder

I apologise in advance if I have chosen the wrong category, but I wonder if anyone can help with a sitemap problem I have. When I use the option that makes use of a urllist.txt, everything works fine.

However, I need to make a sitemap for a website that has far too many pages for me to create a urllist first.

I have adapted the Google file and have made a mistake herein. The problem is that I don't know where the mistake is.

    I am using:

    <?xml version="1.0" encoding="UTF-8"?>

    <directory path="/home/myusername/public_html" url= />
    default_file="index.htm" />
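As pasted, that snippet is not well-formed XML: the `url=` attribute has no value, and the element is closed twice by the stray `/>` at the end of each line. For comparison, a well-formed directory node would look something like the following (the URL here is a placeholder, since the real one appears to have been stripped from the post):

```xml
<directory path="/home/myusername/public_html"
           url="http://www.example.com/"
           default_file="index.htm" />
```

That said, the traceback shows the parser actually reached a directory element, so the file that actually ran may have differed slightly from what is pasted here.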


    The error I get is:

    Traceback (most recent call last):
    File "", line 2199, in ?
    sitemap = CreateSitemapFromFile(flags['config'], suppress_notify)
    File "", line 2147, in CreateSitemapFromFile
    xml.sax.parse(configpath, sitemap)
    File "/usr/local/lib/python2.4/xml/sax/", line 33, in parse
    File "/usr/local/lib/python2.4/xml/sax/", line 107, in parse
    xmlreader.IncrementalParser.parse(self, source)
    File "/usr/local/lib/python2.4/xml/sax/", line 123, in parse
    File "/usr/local/lib/python2.4/xml/sax/", line 207, in feed
    self._parser.Parse(data, isFinal)
    File "/usr/local/lib/python2.4/xml/sax/", line 300, in start_element
    self._cont_handler.startElement(name, AttributesImpl(attrs))
    File "", line 2031, in startElement
    self._inputs.append(InputDirectory(attributes, self._base_url))
    File "", line 877, in __init__
    if not url.startswith(base_url):
    TypeError: expected a character buffer object
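The last line of the traceback is informative on its own. In Python 2, `str.startswith()` raises "TypeError: expected a character buffer object" when its argument is not a string, so `base_url` (which sitemap_gen reads from the `<site>` node) was evidently `None`, suggesting the `base_url` attribute was missing or malformed rather than the `url` attribute. A minimal reproduction sketch (the message wording differs in modern Python, but the cause is the same):

```python
# str.startswith() requires a string (or tuple of strings) as its argument;
# passing None raises TypeError. Python 2.4 phrased this error as
# "expected a character buffer object".
url = "/home/myusername/public_html"
base_url = None  # what sitemap_gen ends up with if base_url is absent

try:
    url.startswith(base_url)
except TypeError as exc:
    print("TypeError:", exc)
```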

    Does anyone have any suggestions?
  2. capitalwest

Try this. I use it to generate sitemaps by having the script crawl all directories and files.

Just change the username and domain name to your own and edit the type of index file you're using (e.g. index.html or index.php). Copy everything from <?xml to </site>

    <?xml version="1.0" encoding="UTF-8" ?>
    <!-- example configuration script

    This file specifies a set of sample input parameters for the client.

    You should copy this file into "config.xml" and modify it for
    your server.


    ** MODIFY **
    The "site" node describes your basic web site.

    Required attributes:
    base_url - the top-level URL of the site being mapped
    store_into - the webserver path to the desired output file.
    This should end in '.xml' or '.xml.gz'
    (the script will create this file)

    Optional attributes:
    verbose - an integer from 0 (quiet) to 3 (noisy) for
    how much diagnostic output the script gives
    suppress_search_engine_notify=1
    - disables notifying search engines about the new map
    (same as the "testing" command-line argument.)
    default_encoding - names a character encoding to use for URLs and
    file paths. (Example: "UTF-8")
    -->

    <site base_url="http://www.example.com/" store_into="/home/yourusername/public_html/sitemap.xml" verbose="1">

    <!--
    All the various nodes in this section control where the script
    looks to find URLs.

    MODIFY or DELETE these entries as appropriate for your server.

    ** MODIFY **
    "directory" nodes tell the script to walk the file system
    and include all files and directories in the Sitemap.

    Required attributes:
    path - path to begin walking from
    url - URL equivalent of that path

    Optional attributes:
    default_file - name of the index or default file for directory URLs
    -->

    <directory path="/home/yourusername/public_html" url="http://www.example.com/" default_file="index.php" />

    <!--
    Filters specify wild-card patterns that the script compares
    against all URLs it finds. Filters can be used to exclude
    certain URLs from your Sitemap, for instance if you have
    hidden content that you hope the search engines don't find.

    Filters can be either type="wildcard", which means standard
    path wildcards (* and ?) are used to compare against URLs,
    or type="regexp", which means regular expressions are used
    to compare.

    Filters are applied in the order specified in this file.

    An action="drop" filter causes exclusion of matching URLs.
    An action="pass" filter causes inclusion of matching URLs,
    shortcutting any other later filters that might also match.
    If no filter at all matches a URL, the URL will be included.
    Together you can build up fairly complex rules.

    The default action is "drop".
    The default type is "wildcard".

    You can MODIFY or DELETE these entries as appropriate for
    your site. However, unlike above, the example entries in
    this section are not contrived and may be useful to you as
    they are.
    -->

    <!-- Exclude URLs that end with a '~' (IE: emacs backup files) -->
    <filter action="drop" type="wildcard" pattern="*~" />

    <!-- Exclude URLs within UNIX-style hidden files or directories -->
    <filter action="drop" type="regexp" pattern="/\.[^/]*" />

    <!-- Exclude URLs that end with a '.xml' extension -->
    <filter action="drop" type="wildcard" pattern="*.xml" />

    </site>
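Before feeding a hand-edited config like the one above to the sitemap script, it can be worth checking that it parses at all. A quick sketch using only the Python standard library (the URL and paths below are placeholders): malformed XML then surfaces as a ParseError with a line and column number instead of a deep traceback inside the script.

```python
# Sanity-check a sitemap_gen-style config with the standard library.
# ET.fromstring() raises xml.etree.ElementTree.ParseError (with a
# line/column position) if the XML is not well-formed.
import xml.etree.ElementTree as ET

config = """<?xml version="1.0" encoding="UTF-8"?>
<site base_url="http://www.example.com/"
      store_into="/home/yourusername/public_html/sitemap.xml"
      verbose="1">
  <directory path="/home/yourusername/public_html"
             url="http://www.example.com/"
             default_file="index.php" />
  <filter action="drop" type="wildcard" pattern="*~" />
</site>
"""

site = ET.fromstring(config)
print(site.tag)              # site
print(site.get("base_url"))  # http://www.example.com/
for child in site:
    print(child.tag)         # directory, then filter
```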
