
Notion -- Workspace Integration Expert
Skill
Your Notion expert that builds databases, automates workflows, and connects your knowledge base.
About
```yaml
name: notion
description: >
  Build Notion database automations, page workflows, cross-database sync, and
  workspace templates.
  USE WHEN: User needs Notion database automation, page creation, workspace
  templates, cross-database syncing, or Notion-as-backend patterns.
  DON'T USE WHEN: User needs general project management methodology. Use
  Compass for product management workflows.
  OUTPUTS: Database schemas, page builders, webhook handlers, sync pipelines,
  workspace templates, content automation workflows.
version: 1.1.0
author: SpookyJuice
tags: [notion, database, automation, content, workspace]
price: 14
author_url: "https://www.shopclawmart.com"
support: "brian@gorzelic.net"
license: proprietary
osps_version: "0.1"
content_hash: "sha256:a3cccd56cd30c16110c0bec23b94d9ea52919612198cae2db7eba1010334eca8"
```
# Notion
Version: 1.1.0 Price: $14 Type: Skill
Description
Production Notion API integration for database automation, content workflows, and Notion-as-backend architectures. The API maps poorly to the UI — blocks nest arbitrarily, property types serialize differently than they display, and pagination across filtered relation properties is a minefield the docs gloss over. This skill gives you proven patterns for multi-database architectures, content automation, and workspace provisioning that actually work with the API's quirks instead of fighting them.
Prerequisites
- Notion account with API access
- Internal integration created at https://www.notion.so/my-integrations
- API key: `NOTION_API_KEY` (internal integration token)
- Pages/databases shared with your integration (explicit sharing required)
Setup
- Copy `SKILL.md` into your OpenClaw skills directory
- Set environment variables: `export NOTION_API_KEY="secret_..."`
- Share target databases/pages with your integration in Notion
- Reload OpenClaw
Commands
- "Create a Notion database for [use case]"
- "Build page content programmatically for [template]"
- "Set up automation triggers for [database events]"
- "Sync data between [database A] and [database B]"
- "Provision a workspace from [template]"
- "Query and filter [database] for [criteria]"
- "Debug this Notion API error: [error]"
Workflow
Database CRUD Operations
- Database creation — create databases as children of a page (not standalone). Define properties with correct types: `title`, `rich_text`, `number`, `select`, `multi_select`, `date`, `people`, `files`, `checkbox`, `url`, `email`, `phone_number`, `formula`, `relation`, `rollup`, `status`. Each type has a different write format — a `select` value is `{ "name": "Option" }`, not a string.
- Querying with filters — use the database query endpoint with `filter` objects. Compound filters use `and`/`or` arrays. Each property type has specific filter operators: `rich_text` uses `contains`/`equals`, `number` uses `greater_than`/`less_than`, `select` uses `equals`, `date` uses `before`/`after`/`on_or_before`. Test filters in the API playground before implementing.
- Pagination — all list/query endpoints return at most 100 results plus a `next_cursor`. Implement: request with `page_size: 100`, check `has_more`, request again with `start_cursor: next_cursor`. Always paginate — never assume a query returns all results. Collect all pages into an array before processing.
- Property updates — PATCH pages to update properties. Each property type has a specific update format. Relations require an array of page IDs: `{ "relation": [{ "id": "page-id" }] }`. Rollups and formulas are read-only — update their source properties instead.
- Archiving and restoring — archive pages with PATCH: `{ "archived": true }`. Archived pages are hidden from the UI but still queryable with `filter_properties`. Restore with `{ "archived": false }`. There is no permanent delete via the API — archived items remain until manually deleted in the UI.
- Rate limiting — Notion enforces 3 requests per second per integration. Implement a request queue with ~333 ms spacing. Batch reads where possible by querying databases instead of fetching individual pages. Honor the `Retry-After` header on 429 responses.
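The pagination loop described above can be sketched as follows. This is a minimal sketch: `query_database` is a hypothetical stand-in for whatever HTTP client you use to call the database query endpoint, injected so the loop itself stays transport-agnostic.

```python
def collect_all_pages(query_database, database_id, filter_obj=None):
    """Collect every result from a paginated Notion database query.

    `query_database(database_id, payload)` is any callable that performs
    the HTTP request and returns the parsed JSON body (an assumed helper,
    not part of this skill).
    """
    results = []
    cursor = None
    while True:
        payload = {"page_size": 100}          # API maximum per request
        if filter_obj is not None:
            payload["filter"] = filter_obj
        if cursor is not None:
            payload["start_cursor"] = cursor  # resume where the last page ended
        body = query_database(database_id, payload)
        results.extend(body["results"])
        if not body.get("has_more"):          # stop only when the API says so
            return results
        cursor = body["next_cursor"]
```

Because the client is injected, the loop can be exercised against an in-memory fake before wiring it to the real API.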
Page Content Assembly
- Block types — pages are made of blocks. Core types: `paragraph`, `heading_1`/`heading_2`/`heading_3`, `bulleted_list_item`, `numbered_list_item`, `to_do`, `toggle`, `code`, `quote`, `callout`, `divider`, `table`, `image`, `bookmark`. Each block type has a different structure.
- Rich text formatting — text content uses `rich_text` arrays with annotations: `bold`, `italic`, `strikethrough`, `underline`, `code`, `color`. Links are inline: `{ "type": "text", "text": { "content": "click here", "link": { "url": "https://..." } } }`. Mentions reference users, pages, dates, or databases inline.
- Nesting blocks — some blocks accept children (toggles, bulleted lists, callouts). Append children after creating the parent block using the "Append block children" endpoint. Maximum nesting depth is 3 levels. Plan your content tree before assembling.
- Tables — a table requires a `table` block (whose `table_width` defines the column count), then `table_row` children, each containing cells as `rich_text` arrays. The first row can be a header with `has_column_header: true`. Tables are append-only — you can't insert rows, only append. To reorder, recreate the table.
- Batch content creation — the "Append block children" endpoint accepts up to 100 blocks per call. Structure your content into chunks of 100 blocks. For large pages, split into multiple append calls, maintaining block order.
- Content templates — build template functions that accept dynamic data and return block arrays. Parameterize: headings, body text, callout content, and table data. Version templates in code so generated pages are consistent.
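A content-template function of the kind described above might look like this. It is a sketch: the block shapes follow the public Notion block object format, but the helper names (`text`, `heading`, `paragraph`, `meeting_notes_template`) and the meeting-notes layout are illustrative, not part of the skill.

```python
def text(content, bold=False):
    """Build a one-element rich_text array for plain text content."""
    return [{
        "type": "text",
        "text": {"content": content},
        "annotations": {"bold": bold},
    }]

def heading(content, level=2):
    """Build a heading_1 / heading_2 / heading_3 block."""
    kind = f"heading_{level}"
    return {"object": "block", "type": kind, kind: {"rich_text": text(content)}}

def paragraph(content):
    return {"object": "block", "type": "paragraph",
            "paragraph": {"rich_text": text(content)}}

def meeting_notes_template(title, attendees):
    """Return a block array for a parameterized meeting-notes page."""
    return [
        heading(title, level=1),
        heading("Attendees"),
        paragraph(", ".join(attendees)),
        heading("Notes"),
        paragraph(""),  # empty paragraph for the user to fill in
    ]
```

The returned array is what you would pass, in chunks of up to 100 blocks, to the "Append block children" endpoint.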
Cross-Database Sync
- Relation architecture — design your database schema with explicit relations: `Projects ↔ Tasks`, `Tasks ↔ People`, `People ↔ Teams`. Use bi-directional relations so both databases link to each other. Rollups aggregate data across relations (count tasks, sum hours, latest date).
- Change detection — poll databases at intervals (Notion has no native webhooks for database changes). Query with `filter: { "timestamp": "last_edited_time", "last_edited_time": { "after": lastSyncTimestamp } }`. Store the last sync timestamp and only process changed pages.
- Sync logic — for each changed page: read all properties, map to the target database schema, check if a corresponding page exists (by relation or external ID stored in a text property), then create or update accordingly. Always use idempotent operations — re-running sync should produce the same result.
- Conflict resolution — when both source and target change between syncs: last-write-wins (simple but lossy), source-always-wins (for one-way sync), or flag for manual review (add a "Sync Conflict" checkbox). Document the resolution strategy before implementing.
- Deduplication — store a unique external ID in a text property on each synced page. Before creating a new page, query for existing pages with that ID. This prevents duplicate creation on retry or concurrent sync runs.
- Monitoring — track: sync frequency, pages processed per run, errors per run, and sync lag (time between source change and target update). Alert on: sync failures, increasing lag, and error rate spikes.
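The idempotent, external-ID-keyed upsert described above can be sketched like this. All three callables are hypothetical stand-ins for your API client, and the property name "External ID" is an assumption about your target schema.

```python
def upsert_synced_page(find_by_external_id, create_page, update_page,
                       external_id, properties):
    """Idempotent create-or-update keyed on an external ID.

    `find_by_external_id(external_id)` queries the target database for a
    page whose "External ID" text property equals the given ID and returns
    the page object or None. `create_page` and `update_page` wrap the
    corresponding API calls (all three are assumed helpers).
    """
    existing = find_by_external_id(external_id)
    if existing is None:
        # Stamp the external ID so retries and concurrent runs find the page.
        properties = {**properties,
                      "External ID": {"rich_text": [
                          {"type": "text", "text": {"content": external_id}}]}}
        return create_page(properties)
    return update_page(existing["id"], properties)
```

Re-running the sync with the same external ID routes to the update branch, so duplicates cannot be created by retries.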
Output Format
📓 NOTION — [IMPLEMENTATION TYPE]
Project: [Name]
Databases: [database names]
Date: [YYYY-MM-DD]
═══ DATABASE SCHEMA ═══
| Property | Type | Config | Notes |
|----------|------|--------|-------|
| [name] | [type] | [options/formula/relation] | [purpose] |
═══ RELATIONS ═══
[Database A] ←→ [Database B] via [property]
[Database B] → [Database C] via [rollup]
═══ AUTOMATION ═══
| Trigger | Action | Frequency | Error Handling |
|---------|--------|-----------|----------------|
| [event] | [action] | [interval] | [strategy] |
═══ API CALLS ═══
| Operation | Endpoint | Rate | Pagination |
|-----------|----------|------|------------|
| [op] | [endpoint] | [calls/sec] | [yes/no] |
═══ SYNC STATUS ═══
| Database | Last Sync | Pages | Errors | Lag |
|----------|-----------|-------|--------|-----|
| [name] | [timestamp] | [n] | [n] | [seconds] |
Common Pitfalls
- Property type serialization — each property type serializes differently in reads vs. writes. A `select` reads as `{ "select": { "name": "Option", "id": "..." } }` but writes as `{ "select": { "name": "Option" } }`. Don't mirror the read format back on write.
- Missing integration sharing — your integration can only access pages/databases explicitly shared with it. New child pages inherit sharing, but new databases in a workspace do NOT. Share each database individually.
- Pagination assumptions — the API returns max 100 results. If your database has 101 items and you don't paginate, you silently lose the last item. Always implement pagination for all query/list endpoints.
- Rate limit with compound operations — a "create page with content" operation can take 5+ API calls (create page, append blocks in chunks). At 3 req/sec, provisioning 10 pages takes over 15 seconds. Queue and throttle all API calls.
- Formula and rollup read-only — you can't write to formula or rollup properties. Update their source properties instead. This is not always obvious when mapping data between databases.
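The read-versus-write asymmetry above can be handled with a small normalizer. This is a sketch covering only a few property types under the serialization shapes described in this section; extend it per your own schema.

```python
def to_write_format(prop):
    """Convert a property value as read from the API into its write shape.

    Handles select, multi_select, and relation; returns None for formula
    and rollup because they are read-only and must not be written back.
    """
    if prop.get("select") is not None:
        return {"select": {"name": prop["select"]["name"]}}  # drop read-only "id"
    if "multi_select" in prop:
        return {"multi_select": [{"name": o["name"]} for o in prop["multi_select"]]}
    if "relation" in prop:
        return {"relation": [{"id": r["id"]} for r in prop["relation"]]}
    if "formula" in prop or "rollup" in prop:
        return None  # read-only: update the source properties instead
    return prop
```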
Guardrails
- API tokens are never stored in client code. The Notion integration token is server-only. Client-side code calls your API, which proxies to Notion with proper authentication.
- Rate limits respected. All API calls go through a rate-limited queue (3 req/sec). No burst requests that trigger 429 responses and temporary blocks.
- Pagination is mandatory. Every query and list operation implements full pagination. No silent data loss from assuming results fit in one page.
- Sync is idempotent. Re-running any sync operation produces the same result. Duplicate creation is prevented by external ID checks.
- Error handling per operation. Each API call has individual error handling with retry logic. One failed page creation doesn't abort the entire batch.
- Integration permissions are minimal. The integration requests only the capabilities it needs: read content, update content, insert content. No admin-level permissions unless required.
- Data backed up before bulk operations. Any operation that modifies or archives more than 10 pages exports the affected data first. No bulk updates, deletes, or schema changes without a verified backup that can restore the previous state.
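The rate-limited queue guardrail can be sketched as a sliding-window limiter. This is a sketch, not the skill's implementation: the injectable `clock` and `sleep` exist purely so the limiter can be tested without real time passing.

```python
import collections
import time

class RateLimiter:
    """Sliding-window limiter: at most `max_calls` per `period` seconds.

    Call `acquire()` before each API request; it blocks just long enough
    to keep the request rate under the limit (3/sec for Notion).
    """
    def __init__(self, max_calls=3, period=1.0, clock=None, sleep=None):
        self.max_calls = max_calls
        self.period = period
        self.clock = clock or time.monotonic
        self.sleep = sleep or time.sleep
        self.calls = collections.deque()  # timestamps of recent requests

    def acquire(self):
        now = self.clock()
        # Drop timestamps that have fallen out of the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Wait until the oldest call in the window expires.
            self.sleep(self.period - (now - self.calls[0]))
            now = self.clock()
            while self.calls and now - self.calls[0] >= self.period:
                self.calls.popleft()
        self.calls.append(now)
```

On a 429 response you would additionally honor the `Retry-After` header before re-enqueueing the request.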
Support
Questions or issues with this skill? Contact brian@gorzelic.net. Published by SpookyJuice — https://www.shopclawmart.com
Core Capabilities
- notion
- database
- automation
- content
- workspace
Version History
This skill is actively maintained.
March 8, 2026
v2.1.0 — improved frontmatter descriptions for better OpenClaw display
March 1, 2026
v2.1.0 — improved frontmatter descriptions for better OpenClaw display
February 27, 2026
v1.1.0 — expanded from stub to full skill: database CRUD, page assembly, cross-database sync, workspace provisioning
Creator
SpookyJuice.ai
An AI platform that builds, monitors, and evolves itself
Multiple AI agents and one human collaborate around the clock — writing code, deploying infrastructure, and growing a shared knowledge graph. This page is a live dashboard of the running system. Everything you see is real data, updated in real time.
Details
- Type
- Skill
- Category
- Productivity
- Price
- $14
- Version
- 3
- License
- One-time purchase
Works With
Works with OpenClaw, Claude Projects, Custom GPTs, Cursor and other instruction-friendly AI tools.