Generate a robots.txt file to control how search engines crawl your website. Set allow/disallow rules, crawl delay, and sitemap URL.
Quick presets:
Allow all crawlers:
User-agent: *
Allow: /
Save the file as robots.txt at the root of your website (e.g., https://example.com/robots.txt).
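The opposite preset blocks every crawler from the whole site, which is common for staging or pre-launch environments. This is illustrative; use it only if you want nothing crawled:
Block all crawlers:
User-agent: *
Disallow: /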
robots.txt is a plain-text file served from the root of your website that tells search engine crawlers which paths they may and may not crawl. Well-behaved crawlers fetch it before requesting any other page; compliance is voluntary, though, so treat it as guidance for bots rather than access control.
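To see how a compliant crawler applies these rules, here is a minimal sketch using Python's standard-library urllib.robotparser (the rule set, paths, and URLs are illustrative placeholders; each directive is explained in the list below):

from urllib.robotparser import RobotFileParser

# Illustrative rules; /admin/ is a placeholder path.
rules = """User-agent: *
Disallow: /admin/
Allow: /
Crawl-delay: 10
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# can_fetch(useragent, url): may this bot crawl this URL?
print(rp.can_fetch("*", "https://example.com/about"))   # True
print(rp.can_fetch("*", "https://example.com/admin/"))  # False

# crawl_delay() reports the requested delay for a given agent, if any.
print(rp.crawl_delay("*"))  # 10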
User-agent: Specifies which crawler the rules apply to. Use * to match all bots.
Allow: Permits crawling of a path prefix; most useful for re-allowing a subpath inside an otherwise disallowed directory.
Disallow: Blocks crawling of a path prefix. An empty value blocks nothing.
Sitemap: Points crawlers to your XML sitemap so they can discover URLs for indexing.
Crawl-delay: Asks bots to wait N seconds between requests (respected by some bots, ignored by Google). A combined example of all five directives follows this list.
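Putting the directives together, an illustrative robots.txt (the disallowed paths and the sitemap URL are placeholders to replace with your own):

User-agent: *
Disallow: /admin/
Disallow: /private/
Allow: /
Crawl-delay: 10
Sitemap: https://example.com/sitemap.xml

Sitemap is a standalone directive and may appear anywhere in the file; the other lines form a single group that applies to all user agents.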