
Web Crawl Integration

MCP server tailored to connecting web crawler data and archives

Installation

Installing for Claude Desktop

Option 1: One-Command Installation

npx mcpbar@latest install pragmar/mcp_server_webcrawl -c claude

This command will automatically install and configure the Web Crawl Integration MCP server for your selected client.

Option 2: Manual Configuration

Run the command below to open your configuration file:

npx mcpbar@latest edit -c claude

After opening your configuration file, copy and paste this configuration:

{
  "mcpServers": {
    "Web Crawl Integration": {
      "command": "uvx",
      "args": [
        "mcp-server-webcrawl"
      ],
      "env": {}
    }
  }
}

MCP Server Webcrawl

Website | GitHub | Docs | PyPI

mcp-server-webcrawl

Advanced search and retrieval for web crawler data. With mcp-server-webcrawl, your AI client filters and analyzes web content under your direction or autonomously. The server includes a full-text search interface with boolean support, and resource filtering by type, HTTP status, and more.

mcp-server-webcrawl provides the LLM a complete menu with which to search your web content, and works with a variety of web crawlers:

| Crawler/Format | Description | Platforms | Setup Guide |
| --- | --- | --- | --- |
| WARC | Standard web archive format | N/A | Setup Guide |
| wget | Command-line website mirroring tool | macOS/Linux | Setup Guide |
| InterroBot | GUI crawler and analyzer | macOS/Windows | Setup Guide |
| Katana | Security-focused crawler | macOS/Windows/Linux | Setup Guide |
| SiteOne | GUI crawler and analyzer | macOS/Windows/Linux | Setup Guide |

mcp-server-webcrawl is free and open source, and requires Claude Desktop and Python (>=3.10). It is installed on the command line, via pip install:

pip install mcp-server-webcrawl

For step-by-step MCP server setup, refer to the Setup Guides.
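
For reference, a pip-based Claude Desktop entry typically invokes the installed command directly and points it at a crawler type and data directory. The flag names and paths below are illustrative; confirm them against the Setup Guides for your crawler:

{
  "mcpServers": {
    "webcrawl": {
      "command": "mcp-server-webcrawl",
      "args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]
    }
  }
}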

Features

  • Claude Desktop ready
  • Multi-crawler compatible
  • Filter by type, status, and more
  • Boolean search support
  • Support for Markdown and snippets
  • Roll your own website knowledgebase

Prompt Routines

mcp-server-webcrawl provides the toolkit necessary to search web crawl data freestyle, figuring it out as you go, reacting to each query. This is what it was designed for.

It is also capable of running routines (as prompts). You can write these yourself, or use the ones provided. These prompts are plain Markdown, pasted into the chat as-is. They are enabled by the advanced search provided to the LLM; queries and logic can be embedded in a procedural set of instructions, or even an input loop, as is the case with the Gopher interface.
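
As a rough sketch of what such a routine can look like (illustrative only, not one of the bundled prompts), an audit-style instruction set might read:

1. Search for status: >=400 and list the matching URLs.
2. For each match, search the content field for pages that reference that URL.
3. Report every broken link with its referring page and a suggested fix.

The bundled prompts below follow the same pattern at greater depth.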

| Prompt | Download | Category | Description |
| --- | --- | --- | --- |
| 🔍 SEO Audit | auditseo.md | audit | Technical SEO (search engine optimization) analysis. Covers the basics, with options to dive deeper. |
| 🔗 404 Audit | audit404.md | audit | Broken link detection and pattern analysis. Not only finds issues, but suggests fixes. |
| ⚡ Performance Audit | auditperf.md | audit | Website speed and optimization analysis. Real talk. |
| 📁 File Audit | auditfiles.md | audit | File organization and asset analysis. Discover the composition of your website. |
| 🌐 Gopher Interface | gopher.md | interface | An old-fashioned search interface inspired by the Gopher clients of yesteryear. |
| ⚙️ Search Test | testsearch.md | self-test | A battery of tests to check for Boolean logical inconsistencies in the search query parser and subsequent FTS5 conversion. |

To shortcut site selection (one less query), paste the Markdown and, in the same request, type "run pasted for [site name or URL]"; the LLM will figure it out. When pasted without additional context, you should be prompted to select from a list of crawled sites.

Boolean Search Syntax

The query engine supports field-specific (field: value) searches and complex boolean expressions. Fulltext is supported as a combination of the url, content, and headers fields.

While the API interface is designed to be consumed by the LLM directly, it can be helpful to familiarize yourself with the search syntax. Searches generated by the LLM are inspectable, but generally collapsed in the UI. If you need to see the query, expand the MCP collapsible.

Example Queries

| Query Example | Description |
| --- | --- |
| privacy | fulltext single keyword match |
| "privacy policy" | fulltext match of the exact phrase |
| boundar* | fulltext wildcard, matches results starting with boundar (boundary, boundaries) |
| id: 12345 | id field matches a specific resource by ID |
| url: example.com/somedir | url field matches results with URLs containing example.com/somedir |
| type: html | type field matches HTML pages only |
| status: 200 | status field matches a specific HTTP status code (equal to 200) |
| status: >=400 | status field matches HTTP status codes greater than or equal to 400 |
| content: h1 | content field matches the HTTP response body (often, but not always, HTML) |
| headers: text/xml | headers field matches HTTP response headers |
| privacy AND policy | fulltext matches both |
| privacy OR policy | fulltext matches either |
| policy NOT privacy | fulltext matches policy results not containing privacy |
| (login OR signin) AND form | fulltext matches login or signin, together with form |
| type: html AND status: 200 | matches only HTML pages with HTTP success |
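
Field filters and fulltext terms can be mixed and grouped freely. A few illustrative combinations (not from the upstream examples):

type: html AND status: >=400
(type: html OR type: pdf) AND privacy NOT "privacy policy"
url: example.com/blog AND content: h1 AND status: 200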

Field Search Definitions

Field search provides search precision, allowing you to specify which columns of the search index to filter. Rather than searching the entire content, you can restrict your query to specific attributes like URLs, headers, or content body. This approach improves efficiency when looking for specific attributes or patterns within crawl data.

| Field | Description |
| --- | --- |
| id | database ID |
| url | resource URL |
| type | enumerated list of types (see types table) |
| size | file size in bytes |
| status | HTTP response codes |
| headers | HTTP response headers |
| content | HTTP body (HTML, CSS, JS, and more) |

Content Types

Crawls contain resource types beyond HTML pages. The type: field search allows filtering by broad content type groups, particularly useful when filtering images without complex extension queries. For example, you might search for type: html NOT content: login to find pages without "login," or type: img to analyze image resources. The table below lists all supported content types in the search system.

| Type | Description |
| --- | --- |
| html | webpages |
| iframe | iframes |
| img | web images |
| audio | web audio files |
| video | web video files |
| font | web font files |
| style | CSS stylesheets |
| script | JavaScript files |
| rss | RSS syndication feeds |
| text | plain text content |
| pdf | PDF files |
| doc | MS Word documents |
| other | uncategorized |

Extras

The extras parameter provides additional processing options, transforming HTTP data (markdown, snippets, xpath), or connecting the LLM to external data (thumbnails). These options can be combined as needed to achieve the desired result format.

| Extra | Description |
| --- | --- |
| thumbnails | Generates base64-encoded images to be viewed and analyzed by AI models. Enables image description, content analysis, and visual understanding while keeping token output minimal. Works with images, which can be filtered using type: img in queries. SVG is not supported. |
| markdown | Provides the HTML content field as concise Markdown, reducing token usage and improving readability for LLMs. Works with HTML, which can be filtered using type: html in queries. |
| snippets | Matches fulltext queries to contextual keyword usage within the content. When used without requesting the content field (or the markdown extra), it provides an efficient means of refining a search without pulling down the complete page contents. Also great for rendering old-school hit-highlighted results as a list, like Google search in 1999. Works with HTML, CSS, JS, or any text-based, crawled file. |
| xpath | Extracts XPath selector data, used in scraping HTML content. Use XPath's text() selector for text-only results; element selectors return outerHTML. Only supported with type: html; other types will be ignored. One or more XPath selectors (//h1, count(//h1), etc.) can be requested using the extrasXpath argument. |
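
As a sketch of how extras combine in a request: the extras parameter and the extrasXpath argument are named above, while the query argument and the overall shape below are assumptions; the tool schema exposed by the server is authoritative.

{
  "query": "type: html AND status: 200",
  "extras": ["markdown", "snippets"]
}

{
  "query": "type: html AND url: example.com/docs",
  "extras": ["xpath"],
  "extrasXpath": ["//title/text()", "count(//h1)"]
}

The first request returns Markdown plus hit-highlighted snippets instead of raw HTML; the second extracts only the title text and an h1 count.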

Extras provide a means of producing token-efficient HTTP content responses. Markdown produces roughly 1/3 the bytes of the source HTML, snippets are generally 500 or so bytes per result, and XPath can be as specific or broad as you choose. The more focused your requests, the more results you can fit into your LLM session.

The idea, of course, is that the LLM takes care of this for you. If you notice your LLM developing an affinity for the "content" field (full HTML), a nudge in chat to budget tokens using the extras feature should be all that is needed.

