host.tools

robots.txt parser

HTTP /api/v1/http/robots

Fetch and parse robots.txt — User-agent groups, Disallow/Allow, Sitemap, Crawl-delay.

https://api.pockethaul.app/robots.txt · 200 · 1248 bytes · 0 User-agent groups
Raw robots.txt
# As a condition of accessing this website, you agree to abide by the following
# content signals:

# (a)  If a content-signal = yes, you may collect content for the corresponding
#      use.
# (b)  If a content-signal = no, you may not collect content for the
#      corresponding use.
# (c)  If the website operator does not include a content signal for a
#      corresponding use, the website operator neither grants nor restricts
#      permission via content signal with respect to the corresponding use.

# The content signals and their meanings are:

# search:   building a search index and providing search results (e.g., returning
#           hyperlinks and short excerpts from your website's contents). Search does not
#           include providing AI-generated search summaries.
# ai-input: inputting content into one or more AI models (e.g., retrieval
#           augmented generation, grounding, or other real-time taking of content for
#           generative AI search answers).
# ai-train: training or fine-tuning AI models.

# ANY RESTRICTIONS EXPRESSED VIA CONTENT SIGNALS ARE EXPRESS RESERVATIONS OF
# RIGHTS UNDER ARTICLE 4 OF THE EUROPEAN UNION DIRECTIVE 2019/790 ON COPYRIGHT
# AND RELATED RIGHTS IN THE DIGITAL SINGLE MARKET.
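
The file above contains only Content Signals comments and no directive lines, which is why the parser reports 0 User-agent groups. For contrast, a hypothetical robots.txt (hostnames and paths invented) that this parser would split into two groups, plus a sitemap and a crawl delay:

# Hypothetical example, not fetched from api.pockethaul.app
User-agent: *
Disallow: /admin/
Allow: /admin/public/
Crawl-delay: 10

User-agent: GPTBot
Disallow: /

Sitemap: https://example.com/sitemap.xml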
How to use robots.txt parser
  1. Paste your input

    Enter the value at the top — domain, IP, URL, email, ASN, hash, whatever fits this tool. The smart input auto-detects type.

  2. Click "Inspect"

    host.tools issues real probes (DNS, HTTP, TCP, TLS, WHOIS where applicable) and renders the result in milliseconds.

  3. Open the API tab

    Every web tool has a sibling JSON endpoint, here /api/v1/http/robots, that returns the same payload. One copy-as-curl click and you're scripting it; see the sketch after this list.
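
As a sketch of step 3, here is one way to script the endpoint. The https://host.tools base URL and the .groups/.sitemaps field names are assumptions about the payload shape, not documented guarantees:

# Fetch the parsed robots.txt as JSON and summarize it with jq.
# .groups and .sitemaps are guessed field names; adjust to the real payload.
curl -s 'https://host.tools/api/v1/http/robots?q=https%3A%2F%2Fapi.pockethaul.app' \
  | jq '{groups: (.groups | length), sitemaps: .sitemaps}'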

Why this matters

robots.txt is how a site declares its crawl policy, to search engines and, increasingly, to AI crawlers. Auditing it is a quick, high-value check: a stray Disallow: / can drop a site from search results, and a missing Sitemap line slows discovery.

API equivalent
/api/v1/http/robots?q=https%3A%2F%2Fapi.pockethaul.app
curl -s 'https://host.tools/api/v1/http/robots?q=https%3A%2F%2Fapi.pockethaul.app'
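
The q value must be URL-encoded. Instead of hand-escaping, you can let curl do the encoding (same assumed base URL as above):

# -G sends the --data-urlencode pair as a query parameter on a GET request.
curl -sG 'https://host.tools/api/v1/http/robots' --data-urlencode 'q=https://api.pockethaul.app'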
Embed this tool
<iframe src="https://host.tools/http/robots?q={INPUT}&embed=1"
  width="100%" height="600" style="border:0"></iframe>

Drop into any HTML page. The embed=1 flag hides nav and footer.
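
{INPUT} must be URL-encoded before it goes into the src. A minimal host page, with the target hard-coded as an example:

<!doctype html>
<html>
  <body>
    <!-- q is the URL-encoded target; embed=1 strips the surrounding chrome -->
    <iframe src="https://host.tools/http/robots?q=https%3A%2F%2Fapi.pockethaul.app&embed=1"
      width="100%" height="600" style="border:0"></iframe>
  </body>
</html>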

FAQ · robots.txt parser

Common questions

Is robots.txt parser free?
Yes — every tool is free on the web with a 200/hour rate limit per IP. The matching API endpoint /api/v1/http/robots is free up to 100 requests/hour, no key required.
Where does the data come from?
Real-time probes against authoritative sources (DNS root, RIRs, registries, the target server itself), plus partner data feeds from hostinfo.com (GeoIP/ASN) and hostcheck.com (reputation).
How fresh are the results?
Live by default. Cached for 5 minutes to make repeat queries instant; pass ?nocache=1 for a forced refresh.
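For example, to force a fresh probe (base URL assumed as above):
# Append nocache=1 to bypass the 5-minute cache.
curl -s 'https://host.tools/api/v1/http/robots?q=https%3A%2F%2Fapi.pockethaul.app&nocache=1'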
Can I run this from the command line?
Yes — every tool ships with a copy-as-curl. There's also an official CLI: host.tools http robots YOUR_INPUT.
Can I monitor results over time?
Pro tier lets you schedule any tool to run every 1/5/15/60 min and alert on diff. See monitors.
host.tools Pro

Run robots.txt parser on a schedule. Get pinged when it changes.

Pro gets you bulk lookups, monitors, webhook alerts, history, exports and 10,000 API calls/day. $19/mo.

  • Schedule any tool — every 1, 5, 15, 60 min
  • Diff against last run, alert on change
  • Webhook + email + Slack + PagerDuty + OpsGenie
  • Bulk CSV upload, 1,000 inputs per job
  • Export results as CSV / NDJSON / Excel
  • 90-day history, comparison view