robots.txt parser
HTTP · /api/v1/http/robots
Fetch and parse robots.txt — User-agent groups, Disallow/Allow, Sitemap, Crawl-delay.
https://answers.google.com/robots.txt
200
2912 bytes
0 User-agent groups (the body below is an HTML page, not a robots.txt file)
Raw robots.txt
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"><html><head><title>Google Answers</title>
<base href="http://answers.google.com/">
<link rel="stylesheet" href="answers/answers.css">
<style type="text/css">
body {
font-family: arial, sans-serif;
font-size: small;
}
#head-nav {
text-align: right;
white-space: nowrap;
}
.directions {
margin-bottom: 1em;
padding: 2px;
width: 600px;
}
table {
font-size: 100%;
}
h2 {
font-size: 120%;
text-align: center;
color: #49188f;
}
.footer {
font-size: 93%;
}
.disclaimer {
width: 600px;
margin: 1em 0;
font-size: 93%;
}
.categories {
border-top: 1px solid #49188f;
border-bottom: 1px solid #49188f;
}
.categories td {
padding: 4px 0;
}
.categories td.left {
padding-right: 20px;
}
.p {
margin-top: 1em;
font-size: 93%;
}
</style></head>
<body>
<center><form action="http://www.google.com/search" method="get"><input type="hidden" name="as_sitesearch" value="answers.google.com">
<table border="0" cellpadding="2" cellspacing="0" class="top"><tr><td><img alt="Google Answers" src="answers/answers-logo-sm.gif" height="59" width="143"></td>
<td> </td>
<td><input type="text" name="q" value="" size="30"></td>
<td><input type="submit" name="btnG" value="Search Answers"></td></tr></table></form>
<table class="directions"><tr><td><h2>Google Answers is no longer accepting questions.</h2>
<p>We're sorry, but Google Answers has been retired, and is no
longer accepting new questions. <br> Search or browse the existing
Google Answers index by using the search
box above or the category links below.</p></td></tr></table>
<table class="categories"><tr><td class="left"><a href="answers/browse/1000.html">Arts and Entertainment</a></td>
<td> </td>
<td class="right"><a href="answers/browse/1500.html">Reference, Education and News</a></td></tr> <tr><td class="left"><a href="answers/browse/1100.html">Business and Money</a></td>
<td> </td>
<td class="right"><a href="answers/browse/1600.html">Relationships and Society</a></td></tr> <tr><td class="left"><a href="answers/browse/1200.html">Computers</a></td>
<td> </td>
<td class="right"><a href="answers/browse/1700.html">Science</a></td></tr> <tr><td class="left"><a href="answers/browse/1300.html">Family and Home</a></td>
<td> </td>
<td class="right"><a href="answers/browse/1800.html">Sports and Recreation</a></td></tr> <tr><td class="left"><a href="answers/browse/1400.html">Health</a></td>
<td> </td>
<td class="right"><a href="answers/browse/1900.html">Miscellaneous</a></td></tr></table></center>
<br>
<div align="center"><div class="footer"><a href="http://www.google.com">Google Home</a> -
<a href="answers/faq.html">Answers FAQ</a> -
<a href="answers/termsofservice.html">Terms of Service</a> -
<a href="http://www.google.com/privacy.html">Privacy Policy</a></div></div></body></html>
How to use robots.txt parser
1. Paste your input
Enter the value at the top — domain, IP, URL, email, ASN, hash, whatever fits this tool. The smart input auto-detects the type.
2. Click "Inspect"
host.tools issues real probes (DNS, HTTP, TCP, TLS, WHOIS where applicable) and renders the result in milliseconds.
3. Open the API tab
Every web tool has a sibling /api/v1/http/robots JSON endpoint with the same payload. One copy-as-curl click and you're scripting it (see the sketch below).
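For example, a minimal Python sketch of step 3. The https://host.tools base URL is assumed from this page's branding, and the response field names depend on the API, so the JSON is just printed verbatim:

import json
import urllib.parse
import urllib.request

# Endpoint path as documented above; base URL assumed to be https://host.tools
BASE = "https://host.tools/api/v1/http/robots"
target = "https://answers.google.com"

url = BASE + "?q=" + urllib.parse.quote(target, safe="")
with urllib.request.urlopen(url, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))  # same payload the web tool renders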
Why this matters
robots.txt is how a site declares its crawl policy to the web. Auditing it is a quick, high-ROI check: a missing or malformed file (like the HTML page returned above) leaves crawlers to their defaults, and a stray Disallow can silently drop pages from search indexes.
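To audit a robots.txt yourself, Python's standard library ships a parser; a minimal sketch (the bot name and URLs are illustrative):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.google.com/robots.txt")  # any site with a real robots.txt
rp.read()  # fetch and parse

# May this user agent fetch this path?
print(rp.can_fetch("MyBot", "https://www.google.com/search"))

# Declared Crawl-delay for the agent, or None
print(rp.crawl_delay("MyBot"))

# Declared Sitemap URLs (Python 3.8+), or None
print(rp.site_maps())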
API equivalent
/api/v1/http/robots?q=https%3A%2F%2Fanswers.google.com
curl -s 'https://host.tools/api/v1/http/robots?q=https%3A%2F%2Fanswers.google.com'
Embed this tool
<iframe src="/http/robots?q={INPUT}&embed=1"
width="100%" height="600" frameborder="0"></iframe>
Drop into any HTML page. The embed=1 flag hides nav and footer.
FAQ · robots.txt parser
Common questions
Is robots.txt parser free?
Yes — every tool is free on the web with a 200/hour rate limit per IP. The matching API endpoint /api/v1/http/robots is free up to 100 requests/hour, no key required.
Where does the data come from?
Real-time probes against authoritative sources (DNS root, RIRs, registries, the target server itself), plus partner data feeds from hostinfo.com (GeoIP/ASN) and hostcheck.com (reputation).
How fresh are the results?
Live by default. Cached for 5 minutes to make repeat queries instant; pass ?nocache=1 for a forced refresh.
Can I run this from the command line?
Yes — every tool ships with a copy-as-curl. There's also an official CLI: host.tools http robots YOUR_INPUT.
Can I monitor results over time?
The Pro tier lets you schedule any tool to run every 1/5/15/60 minutes and alert on diffs. See monitors.
host.tools Pro
Run robots.txt parser on a schedule. Get pinged when it changes.
Pro gets you bulk lookups, monitors, webhook alerts, history, exports and 10,000 API calls/day. $19/mo.
- ✓ Schedule any tool — every 1, 5, 15, 60 min
- ✓ Diff against last run, alert on change
- ✓ Webhook + email + Slack + PagerDuty + OpsGenie
- ✓ Bulk CSV upload, 1,000 inputs per job
- ✓ Export results as CSV / NDJSON / Excel
- ✓ 90-day history, comparison view