March 20, 2026 · 10 min read · Claw Mart Team

How to Automate Meta Title and Description Generation at Scale

Let's be honest about meta titles and descriptions: nobody got into SEO because they were excited about writing 160-character summaries for thousands of pages. Yet here we are, in an industry where the average site has hundreds or thousands of pages, each needing a unique, keyword-optimized, click-worthy title and description — and most of us are still doing it by hand.

I've watched teams spend entire weeks doing nothing but writing meta tags. I've seen agencies charge $50 per page for the privilege. And I've watched Google rewrite roughly half of those carefully crafted titles anyway.

It's time to automate this. Not with some janky template that stamps "[Product Name] | [Brand]" on everything, but with an actual intelligent system that understands page content, knows SEO best practices, and generates output that's genuinely good. Here's how to build that system with OpenClaw — step by step, no hype, just the practical mechanics.

The Manual Workflow (And Why It's Brutal)

Before we automate anything, let's be precise about what "writing meta titles and descriptions" actually involves when done properly. This is the workflow an experienced SEO follows for each page:

Step 1: Keyword and intent research. You pull up your keyword tool, identify the primary target keyword for the page, check search volume and difficulty, and — critically — analyze what Google is actually showing in the SERPs for that query. Are they showing how-to results? Product pages? Listicles? This shapes everything.

Step 2: Page content analysis. You read the page (or at least skim the H1, intro, subheadings, and conclusion) to understand what the page actually delivers. You can't write a good meta description for a page you haven't looked at.

Step 3: Title creation. You craft something under ~60 characters that front-loads the primary keyword, communicates clear value, includes the brand name if appropriate, and doesn't duplicate any other title on the site.

Step 4: Meta description writing. 140–160 characters. Needs the keyword worked in naturally, a reason to click, ideally a call-to-action or unique differentiator. This is copywriting, not just summarization.

Step 5: Uniqueness and compliance check. No duplicates across the site. Consistent brand voice. Nothing that makes legal nervous.

Step 6: Implementation. Log into the CMS, find the right fields (or install a plugin that gives you those fields), paste everything in.

Step 7: Monitor and iterate. Weeks later, check Google Search Console to see if CTR actually improved. If Google rewrote your title, figure out why.

An experienced SEO spends 15–40 minutes per page on this. For a 500-page site, that's 150–300 hours of work. At agency rates of $25–75 per page, you're looking at $12,500 to $37,500 just for meta tags.

Most companies don't actually do this for every page. They manually optimize maybe the top 10–20% of pages by traffic and template the rest. Which means 80–90% of their pages have generic, underperforming metadata. That's the current state of things.

What Makes This Painful

The time and cost alone are bad enough, but there are compounding problems:

Google rewrites your work constantly. Ahrefs' studies from 2022–2026 show Google rewrites title tags in 34–60% of cases and meta descriptions even more often. This creates a demoralizing feedback loop: why spend 30 minutes perfecting a meta description if Google's just going to replace it with a random sentence from your page?

Staleness is everywhere. Pages get updated — new sections, revised pricing, different positioning — but nobody goes back to update the meta tags. I've audited sites where the meta description references a product feature that was removed two years ago.

Duplicates are rampant. A 2023 Ahrefs study found that only 42% of pages in their dataset had unique meta descriptions. E-commerce sites are the worst offenders — hundreds of product pages with identical or near-identical descriptions.

Consistency versus creativity is a real tension. At scale, you need consistency (brand voice, formatting conventions, keyword patterns). But consistency tends to produce boring, cookie-cutter output that doesn't stand out in SERPs. Hand-writing solves the creativity problem but destroys consistency.

The opportunity cost is massive. Every hour your SEO team spends writing meta descriptions is an hour not spent on content strategy, link building, technical SEO, or conversion optimization — work that typically has higher leverage.

What AI Can Handle Now

Here's where I want to be honest rather than breathless. AI is not going to replace your senior SEO's judgment on which pages to prioritize or how to position your brand against competitors. But it can absolutely handle the mechanical bulk of this workflow, and it can do it well.

What AI does effectively today:

  • Content extraction and summarization. Given a URL or page content, AI can identify the core topic, key points, and value proposition accurately.
  • First-draft generation. Titles and descriptions that are keyword-optimized, properly length-constrained, and genuinely readable. Not perfect, but a solid 70–80% of the way there.
  • Bulk processing. Thousands of pages in hours rather than weeks.
  • Constraint enforcement. Character limits, keyword placement rules, no-duplicate checks — AI follows formatting rules more consistently than humans do over large batches.
  • Variation generation. Need three title options per page for A/B testing? Trivial at scale.
  • Duplicate detection and remediation. Flagging identical or near-identical metas across a site and suggesting unique alternatives.

What AI still struggles with:

  • Truly compelling emotional hooks that differentiate you in a crowded SERP
  • Nuanced brand voice (it trends generic or slightly salesy)
  • Strategic page prioritization
  • Legal and compliance sensitivity
  • Understanding competitive context (what your competitors' titles look like and how to stand out)

The practical upshot: AI should generate the first draft for every page, humans should review and refine the pages that matter most. This is the "human-in-the-loop" model that the best agencies have already adopted, and it's exactly what we can build systematically with OpenClaw.

Step by Step: Building the Automation with OpenClaw

Here's the concrete implementation. We're going to build an AI agent on OpenClaw that takes a list of URLs (or page content), generates optimized meta titles and descriptions, and outputs them in a format ready for CMS import.

Step 1: Define Your Inputs

Your agent needs structured input for each page. At minimum:

{
  "url": "https://example.com/blue-widget",
  "page_content": "Full text or key sections of the page",
  "primary_keyword": "blue widgets",
  "secondary_keywords": ["buy blue widgets", "best blue widgets"],
  "brand_name": "WidgetCo",
  "page_type": "product"
}

You can get this data by crawling your site with Screaming Frog or Sitebulb, exporting to CSV, and enriching it with keyword data from whatever SEO tool you use. The point is: feed the agent structured data, not just a naked URL.
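As a sketch of that enrichment step, here's how you might turn a crawl export into the structured records above. The column names ("Address", "H1-1") follow Screaming Frog's default CSV export, and `keyword_map` stands in for whatever keyword data your SEO tool provides; both are assumptions to adapt to your own setup.

```python
import csv

def build_agent_inputs(crawl_csv_path, keyword_map, brand_name="WidgetCo"):
    """Convert a crawl CSV export into structured agent input records.

    Column names ("Address", "H1-1", "Page Type") mirror a Screaming Frog
    export; adjust them to match your crawler's output.
    """
    records = []
    with open(crawl_csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            url = row["Address"]
            kw = keyword_map.get(url, {})  # enrichment from your keyword tool
            records.append({
                "url": url,
                "page_content": row.get("H1-1", ""),  # or full extracted text
                "primary_keyword": kw.get("primary", ""),
                "secondary_keywords": kw.get("secondary", []),
                "brand_name": brand_name,
                "page_type": row.get("Page Type", "product"),
            })
    return records
```

From there, the record list can be fed to the agent batch by batch.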

Step 2: Build the Agent's Instructions in OpenClaw

This is where the actual intelligence lives. In OpenClaw, you're configuring an agent with specific instructions that encode your SEO knowledge. Here's the kind of system prompt you'd build into your agent:

You are an SEO meta tag specialist. For each page provided, generate:

1. A meta title (max 60 characters including spaces)
2. A meta description (max 155 characters including spaces)
3. An alternate title variation for A/B testing

Rules:
- Place the primary keyword within the first 30 characters of the title when possible
- Include brand name at the end of the title, separated by a pipe: " | BrandName"
- Meta description must include the primary keyword naturally (not forced)
- Meta description must contain a clear value proposition or call-to-action
- Never start the meta description with the brand name
- Each output must be unique — no two pages should have identical titles or descriptions
- Match the page type tone: product pages should be benefit-driven, blog posts should be curiosity-driven, category pages should be scope-defining
- If the page content is thin, flag it for human review rather than generating low-quality output
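The rules above are exactly the kind of thing you should also verify programmatically, rather than trusting the model to self-enforce. A minimal validator, assuming the same conventions as the prompt (60/155-character limits, pipe-separated brand suffix), might look like this:

```python
def validate_meta(title, description, primary_keyword, brand_name):
    """Check one generated title/description pair against the prompt's rules.

    Returns a list of flag strings; an empty list means the pair passes.
    """
    flags = []
    if len(title) > 60:
        flags.append("title_over_60_chars")
    if len(description) > 155:
        flags.append("description_over_155_chars")
    pos = title.lower().find(primary_keyword.lower())
    if pos < 0 or pos > 30:  # keyword missing or not front-loaded
        flags.append("keyword_not_front_loaded")
    if not title.endswith(" | " + brand_name):
        flags.append("missing_brand_suffix")
    if primary_keyword.lower() not in description.lower():
        flags.append("keyword_missing_from_description")
    if description.lower().startswith(brand_name.lower()):
        flags.append("description_starts_with_brand")
    return flags
```

Anything that comes back with a non-empty flag list goes to the human review queue instead of shipping.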

Step 3: Add Contextual Intelligence

The difference between bad automation and good automation is context. In OpenClaw, you can give your agent access to reference data that makes its output smarter:

  • A list of existing meta titles/descriptions on the site (so it can check for duplicates before generating new ones)
  • Your brand voice guidelines (e.g., "We never use exclamation marks. We use direct, confident language. We avoid superlatives like 'best' or 'amazing.'")
  • Competitor SERP examples for key terms (so the agent understands what it's competing against)
  • Performance benchmarks — if you feed in CTR data from Search Console, the agent can learn from what's working

Step 4: Process in Batches

Don't try to do your entire site in one pass. Build your OpenClaw workflow to process pages in manageable batches — say 50–100 at a time — with output validation between batches.
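The batching loop itself is simple plumbing. In this sketch, `generate_batch` stands in for your OpenClaw agent call and `validate` for whatever checks you run between batches; both are placeholders, not real OpenClaw APIs:

```python
def batches(items, size=50):
    """Yield successive fixed-size batches from a list of page records."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def process_site(pages, generate_batch, validate):
    """Run generation batch by batch, routing failures to human review.

    `generate_batch` is a placeholder for your agent invocation;
    `validate` returns a truthy value for records that need review.
    """
    approved, review_queue = [], []
    for batch in batches(pages, size=50):
        for record in generate_batch(batch):
            (review_queue if validate(record) else approved).append(record)
    return approved, review_queue
```

Keeping the batch size modest means a misconfigured prompt ruins 50 pages, not 5,000, before you catch it.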

For each batch, the agent should output structured data:

{
  "url": "https://example.com/blue-widget",
  "meta_title": "Blue Widgets for Home & Office | WidgetCo",
  "meta_title_alt": "Shop Blue Widgets — Free Shipping Over $50 | WidgetCo",
  "meta_description": "Find durable blue widgets built for everyday use. Browse styles for home and office with free shipping on orders over $50.",
  "character_count_title": 43,
  "character_count_description": 148,
  "flags": [],
  "confidence": "high"
}

The flags field is important. Your agent should be configured to flag pages where:

  • Page content is too thin to generate a meaningful description
  • The primary keyword doesn't appear in the page content (misalignment)
  • The generated output exceeds character limits after multiple attempts
  • The page appears to be a duplicate of another page
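The duplicate flag in particular is worth automating across the whole output set, not just within a batch. A brute-force near-duplicate check using Python's standard-library `difflib` is enough for a few thousand pages (beyond that you'd want shingling or MinHash); `metas` here is an assumed url-to-description mapping:

```python
from difflib import SequenceMatcher

def find_near_duplicates(metas, threshold=0.9):
    """Return URL pairs whose meta descriptions are near-identical.

    `metas` maps url -> description. O(n^2) comparisons, so this is
    for modest sites; use shingling/MinHash at larger scale.
    """
    urls = list(metas)
    pairs = []
    for i, a in enumerate(urls):
        for b in urls[i + 1:]:
            ratio = SequenceMatcher(None, metas[a].lower(), metas[b].lower()).ratio()
            if ratio >= threshold:
                pairs.append((a, b, round(ratio, 2)))
    return pairs
```

Any pair that comes back gets one of its two descriptions sent back to the agent for a unique rewrite.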

Step 5: Human Review Layer

This is non-negotiable. Set up a review queue where a human looks at:

  • All flagged pages (the agent identified issues)
  • Top pages by traffic or revenue (your most important 10–20%)
  • A random sample of 5–10% from each batch (quality control)

Everything else ships as-is. This is how you get 80% of the time savings while maintaining quality where it counts.
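Assembling that queue is mechanical once you have flags and traffic data. In this sketch, `traffic` is an assumed url-to-sessions mapping (from Search Console or analytics), and the fractions mirror the review tiers above:

```python
import random

def build_review_queue(results, traffic, top_fraction=0.2, sample_rate=0.05, seed=42):
    """Select records for human review: all flagged pages, the top slice
    by traffic, and a random QC sample of everything else.

    `traffic` maps url -> sessions (or revenue); a fixed seed keeps the
    QC sample reproducible across runs.
    """
    rng = random.Random(seed)
    flagged = [r for r in results if r["flags"]]
    ranked = sorted(results, key=lambda r: traffic.get(r["url"], 0), reverse=True)
    top = ranked[:max(1, int(len(results) * top_fraction))]
    chosen = {r["url"] for r in flagged + top}
    rest = [r for r in results if r["url"] not in chosen]
    sample = rng.sample(rest, min(len(rest), max(1, int(len(rest) * sample_rate))))
    queue, seen = [], set()
    for r in flagged + top + sample:  # de-duplicate, flagged pages first
        if r["url"] not in seen:
            seen.add(r["url"])
            queue.append(r)
    return queue
```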

Step 6: CMS Integration

Your final output should match whatever your CMS needs for bulk import. For WordPress sites using Yoast or Rank Math, that's typically a CSV with URL, title, and description columns that you can import via plugin. For Shopify, you can use their bulk editor or an app like Smart SEO. For custom CMS setups, you'd map to your database fields.

OpenClaw's output is structured data, so reformatting for your specific CMS is straightforward.
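For the WordPress/CSV path, that reformatting is a few lines. The column names below are illustrative, not a fixed standard — check what your specific import plugin expects before relying on them:

```python
import csv

def export_for_cms(results, out_path):
    """Write agent output records to a CSV for bulk CMS import.

    Column headers are placeholders; match them to your import
    plugin's expected format (Yoast, Rank Math, etc.).
    """
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["URL", "Meta Title", "Meta Description"])
        writer.writeheader()
        for r in results:
            writer.writerow({
                "URL": r["url"],
                "Meta Title": r["meta_title"],
                "Meta Description": r["meta_description"],
            })
```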

What Still Needs a Human

I want to be direct about where automation ends and human expertise begins, because overselling AI is how you end up with a site full of bland, interchangeable meta tags that don't move the needle.

Humans should own:

  • Page prioritization and strategy. Which pages are worth the most to your business? Which keywords are you actually trying to win? The agent executes; the human decides what matters.
  • Competitive differentiation. If all ten results on page one have AI-generated titles, they're going to start sounding identical. A human can look at the SERP and write something that breaks the pattern.
  • Emotional and persuasive copy for high-value pages. Your homepage, your top 20 landing pages, your highest-revenue product categories — these deserve hand-crafted meta tags. Use the AI draft as a starting point, but apply real copywriting skill.
  • Brand voice calibration. Review the first few batches carefully. Adjust the agent's instructions in OpenClaw until the output sounds like you, not like generic SEO copy.
  • Performance analysis. Monitor CTR in Search Console after deploying AI-generated metas. Identify underperformers. Feed that learning back into the agent's instructions. This feedback loop is what separates a one-time automation from a system that improves over time.

Expected Time and Cost Savings

Let's do the math on a real scenario.

A 1,000-page e-commerce site:

  • Manual approach: 1,000 pages × 25 minutes average = ~417 hours. At $50/hour, that's roughly $20,800. Timeline: 8–12 weeks with a small team.
  • Template-only approach: Maybe 20 hours of setup, but output quality is low. CTR underperforms hand-written by 15–25%.
  • OpenClaw agent with human review: Initial setup takes 4–8 hours (building the agent, configuring rules, preparing input data). Bulk generation takes hours, not weeks. Human review of top 200 pages and flagged items: ~40 hours. Total: roughly 50 hours. Timeline: 1–2 weeks.

That's an 85–90% reduction in time and a proportional cost reduction, with output quality that's close to manual for most pages and identical to manual for your top pages (because a human reviewed those).

For agencies managing multiple client sites, the economics are even more compelling. Build your OpenClaw agent once with customizable brand voice parameters, and you can deploy it across every client. The setup cost amortizes to nearly nothing.

The ongoing maintenance story is equally good. When you update pages, re-run the agent on just the changed URLs. When Google Search Console shows CTR drops, flag those pages for regeneration. The system stays current without the classic problem of meta tags rotting while pages evolve.

Where to Go From Here

If you're managing more than a few dozen pages and still writing meta titles and descriptions entirely by hand, you're spending time on a problem that AI handles well. Not perfectly — but well enough that the remaining imperfections are better addressed through targeted human review than through universal manual effort.

The practical next step: pick a section of your site — maybe 50–100 product pages or blog posts — and build this workflow in OpenClaw. Measure the output quality honestly. Compare CTR before and after. Adjust the agent's instructions based on what you learn.

You can find the tools to build this on the Claw Mart marketplace, where pre-built agents and components are available for exactly these kinds of SEO workflows. If the automation you need doesn't exist yet, that's also an opportunity — build it and list it on Claw Mart through Clawsourcing. The demand for practical, well-built SEO automation agents is growing fast, and the teams who build them first are the ones agencies and in-house teams will buy from.

Stop spending weeks on meta tags. Build the system once, improve it continuously, and redirect your time to the SEO work that actually requires a human brain.
