April 17, 2026 · 12 min read · Claw Mart Team

How to Automate Major Donor Prospect Research and Scoring

Learn how to automate Major Donor Prospect Research and Scoring with practical workflows, tool recommendations, and implementation steps.


Most nonprofit development teams run prospect research the same way they did in 2015. A researcher opens a dozen browser tabs, copies data from WealthEngine into a Word doc, cross-references SEC filings with LinkedIn profiles, guesses at capacity scores, and produces a 5-page brief that took 12 hours to write. The gift officer skims it in the car on the way to a meeting.

That's not a research process. That's a bottleneck masquerading as one.

Here's the thing: about 80% of what happens in major donor prospect research is mechanical data gathering. It's pulling records, summarizing public filings, cross-referencing names against databases, and formatting the output. That's exactly the kind of work an AI agent handles well, not because AI is magic, but because aggregating structured data from known sources at speed is what software does.

The other 20% is deeply human: interpreting motivation, assessing mission fit, making ethical calls, building the actual relationship. No agent is replacing that.

This post walks through how to automate the mechanical 80% using an AI agent built on OpenClaw, so your team can spend their time on the 20% that actually closes gifts.


The Manual Workflow (And Why It's Killing Your Pipeline)

If you work in advancement services, prospect development, or fundraising operations at any nonprofit, university, or hospital, you probably recognize this sequence:

Step 1: Prospect Identification
Pull a list from your CRM: alumni who attended a recent event, lapsed donors, board connections, attendees at a gala. Maybe 500–2,000 names.

Step 2: Wealth Screening
Run those names through a commercial tool like WealthEngine, DonorSearch, or iWave. Get back capacity estimates, real estate holdings, stock ownership flags, and known philanthropic history.

Step 3: Deep Research
For the top prospects, a researcher manually investigates: Google, LinkedIn, SEC EDGAR filings, county property records, Foundation Directory Online (Candid), political giving on OpenSecrets, news archives, obituary databases (for family connections), and your own CRM history.

Step 4: Relationship & Affinity Mapping
Who on your board knows this person? Did they attend the same school? Do they serve on overlapping foundation boards? This is usually done by asking around and scanning LinkedIn.

Step 5: Scoring
The researcher assigns a subjective rating, typically "High / Medium / Low", across capacity, affinity, and readiness. Different analysts rate the same prospect differently. There's no consistent model.

Step 6: Profile Compilation
Everything gets written up into a 2–10 page brief, formatted for the gift officer, and entered into the CRM.

Step 7: Qualification
The gift officer makes contact. If the prospect is viable, more research happens reactively.

The numbers on this are rough:

  • An in-depth major gift profile takes 8–25 hours to produce (APRA 2022 Benchmarking Report).
  • Mid-level prospects: 2–6 hours each.
  • The average researcher produces 180–280 profiles per year.
  • 65–80% of researcher time goes to data collection, not analysis (APRA and CASE surveys, 2021–2023).
  • 58% of advancement offices say insufficient staff time is their number one barrier to effective prospect development (CASE 2023).
  • A single FTE researcher costs $75k–$110k, plus $15k–$80k/year in database subscriptions.
  • Most small-to-mid-size nonprofits do zero proactive research because they can't afford any of this.

This means the average shop can deeply research maybe 250 people per year. If your CRM has 50,000 records, you're looking at less than 1% coverage. You're almost certainly missing major gift prospects who are sitting right there in your database, unscored and uncontacted.


What Makes This Painful

Beyond the raw time costs, four things make this workflow particularly brutal:

1. Data fragmentation. Researchers juggle 6–12 different paid databases with inconsistent data quality, different interfaces, and no unified search. They're human middleware.

2. Stale information. Wealth data from screening vendors can be 12–36 months old. A prospect who sold a company six months ago might still show up as "medium capacity" because the data hasn't refreshed.

3. Subjective, inconsistent scoring. When two researchers rate the same prospect differently, you don't have a scoring system; you have opinions. Gift officers lose trust in the ratings, and portfolio decisions get made on gut feel instead of data.

4. Scalability wall. You can hire more researchers, but the economics don't hold. Doubling your research team doubles your cost but doesn't double your major gifts proportionally, because the bottleneck shifts to gift officer capacity. What you actually need is broader, lighter-touch research across your entire database, the kind of thing only automation can provide.


What AI Can Handle Right Now

Let's be specific. An AI agent built on OpenClaw can reliably automate the following:

  • Multi-source data aggregation. Pull from wealth screening APIs, public records, SEC EDGAR, Candid/Foundation Directory, news APIs, LinkedIn (within TOS), political giving databases, and your own CRM, in seconds per prospect rather than hours.

  • Entity resolution. Match "Robert J. Smith" in your CRM to "Bob Smith" on a foundation board to "Robert James Smith" in property records. This is tedious for humans and straightforward for an agent with the right matching logic.

  • Capacity scoring with a consistent model. Instead of subjective High/Medium/Low, the agent applies the same weighted formula to every record every time. You define the weights (real estate value × 0.3 + known giving × 0.4 + stock holdings × 0.2 + business ownership × 0.1, or whatever model fits your institution). The agent executes it uniformly.

  • Relationship mapping. Cross-reference your board list, donor list, and event attendance against prospect connections found in public data. Surface overlaps automatically.

  • News and event monitoring. Continuously scan for liquidity events (company acquisitions, IPOs, real estate sales), life events (retirement, board appointments, awards), and philanthropic signals (gifts to peer institutions).

  • First-draft prospect briefs. Generate a structured, 1–2 page research snapshot that covers wealth indicators, philanthropic history, known connections, and suggested talking points. The researcher reviews and edits instead of writing from scratch.

  • Portfolio-wide scoring and re-ranking. Instead of researching 250 people deeply, score your entire CRM on a rolling basis and surface the top prospects automatically. When new data comes in (a stock sale, a news article, a new gift to a peer organization), the score updates.
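The entity-resolution item above is worth making concrete, since it's the step most teams underestimate. A minimal sketch of the matching idea using only the standard library; the nickname table, weights, and threshold are illustrative assumptions, not OpenClaw internals:

```python
from difflib import SequenceMatcher

# Tiny illustrative nickname table; a real one would be much larger
NICKNAMES = {"bob": "robert", "rob": "robert", "bill": "william", "liz": "elizabeth"}

def normalize_name(name: str) -> list[str]:
    """Lowercase, strip punctuation, expand common nicknames, drop initials."""
    tokens = [t.strip(".,").lower() for t in name.split()]
    tokens = [NICKNAMES.get(t, t) for t in tokens]
    return [t for t in tokens if len(t) > 1]  # drop middle initials like "J"

def name_match_score(a: str, b: str) -> float:
    """Rough 0-1 similarity between two person names."""
    ta, tb = normalize_name(a), normalize_name(b)
    if not ta or not tb:
        return 0.0
    # Blend exact token overlap with a character-level fuzzy ratio
    exact = len(set(ta) & set(tb)) / max(len(ta), len(tb))
    fuzzy = SequenceMatcher(None, " ".join(ta), " ".join(tb)).ratio()
    return 0.6 * exact + 0.4 * fuzzy

# "Robert J. Smith" and "Bob Smith" normalize to the same tokens
print(round(name_match_score("Robert J. Smith", "Bob Smith"), 2))  # → 1.0
```

In practice you'd also match on employer, city, and email domain before declaring two records the same person; a name-only match is a candidate, not a confirmation.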

An iWave study from 2026 found that AI-assisted research teams reduced average time per prospect from 11 hours to 2.8 hours, a 74% reduction. And that was with general-purpose tools, not purpose-built agents. With a properly configured OpenClaw agent, the data-gathering phase can drop to minutes per prospect, with the human researcher spending their time on review and interpretation.


Step-by-Step: Building the Agent on OpenClaw

Here's how to actually build this. I'm going to walk through the architecture and key implementation decisions.

Step 1: Define Your Data Sources

Start by listing every source your researchers currently use. Typical stack:

  • Wealth screening API (WealthEngine, iWave, or DonorSearch; most offer API access on enterprise plans)
  • SEC EDGAR (free, public API)
  • Candid / Foundation Directory Online (API available)
  • OpenSecrets (political giving; API available)
  • County property records (varies by jurisdiction; many have APIs or bulk data)
  • News APIs (NewsAPI, Google News RSS, Meltwater if you have a subscription)
  • LinkedIn (within platform TOS, typically via enrichment services like People Data Labs or Proxycurl)
  • Your CRM (Salesforce, Raiser's Edge, Bloomerang, Virtuous; whatever you use, via its API)

In OpenClaw, you configure these as tool integrations. Each data source becomes a tool the agent can call, with defined inputs (name, email, organization) and outputs (structured data).
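I won't reproduce OpenClaw's exact configuration syntax here, but conceptually each integration reduces to a tool definition: a name, the identifiers the agent supplies, and the structured fields it gets back. A hypothetical sketch of that registry idea (every source name and field below is illustrative):

```python
from dataclasses import dataclass

@dataclass
class DataSourceTool:
    name: str
    inputs: list[str]           # identifiers the agent passes in
    outputs: list[str]          # structured fields the source returns
    rate_limit_per_min: int = 60

# Illustrative registry; real tools would carry auth config and endpoints
TOOLS = [
    DataSourceTool("crm", ["email", "name"], ["giving_history", "event_attendance"]),
    DataSourceTool("wealth_screen", ["name", "city"], ["capacity_estimate", "real_estate"]),
    DataSourceTool("sec_edgar", ["name", "employer"], ["insider_transactions"]),
    DataSourceTool("news_search", ["name"], ["recent_mentions"], rate_limit_per_min=30),
]

def tools_accepting(identifier: str) -> list[str]:
    """Which sources can the agent query given this identifier?"""
    return [t.name for t in TOOLS if identifier in t.inputs]

print(tools_accepting("name"))  # → ['crm', 'wealth_screen', 'sec_edgar', 'news_search']
```

The useful property of treating sources this way is that adding a ninth database is a registry entry, not a workflow rewrite.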

Step 2: Build the Research Agent Workflow

Your OpenClaw agent follows this sequence:

1. Receive prospect name + any known identifiers (email, employer, city)
2. Query CRM for existing data (giving history, event attendance, relationships)
3. Run wealth screening API → get capacity estimate, real estate, stock flags
4. Query SEC EDGAR for insider transactions, 10-K/proxy filings
5. Search Foundation Directory for board memberships, foundation giving
6. Pull political giving data from OpenSecrets
7. Run news search for recent mentions (last 24 months)
8. Cross-reference against board/donor lists for relationship connections
9. Apply scoring model (capacity × weight + affinity × weight + recency × weight)
10. Generate structured prospect brief
11. Flag any data conflicts or low-confidence fields for human review
12. Push results to CRM and notify the assigned researcher/gift officer

In OpenClaw, this is built as a multi-step agent with conditional logic. If the wealth screening returns high capacity but zero philanthropic history, the agent flags it. If there's a recent liquidity event (IPO, acquisition), it surfaces that prominently. If the prospect is already in an active portfolio, it updates the existing record rather than creating a new brief.
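Stripped of the real API calls, that sequence is a pipeline plus a few conditional flags. A minimal sketch with stub fetchers standing in for the actual integrations (field names and thresholds here are assumptions for illustration):

```python
def research_prospect(prospect: dict, fetchers: dict) -> dict:
    """Run each data source, then apply the workflow's conditional checks."""
    record = dict(prospect)
    for source, fetch in fetchers.items():
        record[source] = fetch(prospect)    # in production: API call with retries

    flags = []
    wealth = record.get("wealth_screen", {})
    # High capacity with zero philanthropic history gets flagged for review
    if wealth.get("capacity", 0) >= 80 and not record.get("crm", {}).get("giving_history"):
        flags.append("high_capacity_no_history")
    # Recent liquidity events get surfaced prominently
    if any(e in ("ipo", "acquisition") for e in record.get("news", {}).get("events", [])):
        flags.append("liquidity_event")
    record["flags"] = flags
    return record

# Stub fetchers standing in for real integrations
fetchers = {
    "crm": lambda p: {"giving_history": []},
    "wealth_screen": lambda p: {"capacity": 92},
    "news": lambda p: {"events": ["acquisition"]},
}
result = research_prospect({"name": "Jane Martinez"}, fetchers)
print(result["flags"])  # → ['high_capacity_no_history', 'liquidity_event']
```

The conditional branches are the point: the agent isn't just aggregating, it's routing records into "needs a human now" versus "score and file."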

Step 3: Configure the Scoring Model

This is where you make the agent yours. A generic scoring model might look like:

Prospect Score = (Capacity Score × 0.30)
              + (Philanthropic History Score × 0.25)
              + (Affinity Score × 0.25)
              + (Recency/Engagement Score × 0.15)
              + (Liquidity Event Bonus × 0.05)

Where:
- Capacity Score: normalized 0–100 based on estimated net worth / giving capacity
- Philanthropic History: total known giving ÷ peer benchmark, normalized
- Affinity Score: # of connections to your org (events, board overlap, alumni status, prior gifts)
- Recency: days since last interaction, inverse-scaled
- Liquidity Event Bonus: binary flag × multiplier if IPO/sale/inheritance in last 18 months

You configure these weights in OpenClaw and adjust them based on what actually predicts major gifts at your institution. After 6–12 months of data, you can tune the model against real outcomes (who actually gave, who didn't) and improve accuracy iteratively.
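The model translates directly into code. A sketch using the example weights above; it assumes each component has already been normalized to 0–100 upstream:

```python
# Example weights from the generic model above; tune these to your institution
WEIGHTS = {
    "capacity": 0.30,
    "philanthropy": 0.25,
    "affinity": 0.25,
    "recency": 0.15,
    "liquidity_bonus": 0.05,
}

def prospect_score(components: dict) -> float:
    """Weighted composite score; each component is pre-normalized to 0-100."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return round(sum(WEIGHTS[k] * components.get(k, 0) for k in WEIGHTS), 1)

# Example: strong capacity, modest history, recent liquidity event
print(prospect_score({
    "capacity": 90, "philanthropy": 40, "affinity": 60,
    "recency": 70, "liquidity_bonus": 100,
}))  # → 67.5
```

Keeping the weights in one dict (rather than scattered through the workflow) is what makes the later tuning step cheap: you adjust five numbers and re-run the portfolio.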

Step 4: Set Up Continuous Monitoring

The biggest unlock isn't faster one-time research; it's ongoing portfolio intelligence. Configure your OpenClaw agent to:

  • Re-score your entire prospect pool weekly or monthly
  • Monitor news feeds daily for mentions of anyone in your CRM
  • Alert gift officers when a prospect's score changes significantly (e.g., liquidity event detected, new foundation board appointment, large gift to a peer institution)
  • Automatically generate updated briefs when triggered by score changes

This turns prospect research from a one-time deliverable into a living system. Your gift officers get a notification that says "Jane Martinez's score jumped from 72 to 91: she just sold her company for $40M and gave $500K to [peer university] last month" instead of finding out six months later.
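The alert logic itself is just a diff against the previous score snapshot. A sketch, with the 15-point threshold chosen arbitrarily for illustration:

```python
ALERT_THRESHOLD = 15  # points of score change worth a gift-officer alert

def score_change_alerts(previous: dict, current: dict) -> list[str]:
    """Compare two score snapshots {prospect_id: score} and emit alert messages."""
    alerts = []
    for pid, new_score in current.items():
        old_score = previous.get(pid)
        if old_score is not None and abs(new_score - old_score) >= ALERT_THRESHOLD:
            direction = "jumped" if new_score > old_score else "dropped"
            alerts.append(f"{pid}: score {direction} from {old_score} to {new_score}")
    return alerts

prev = {"jane_martinez": 72, "sam_lee": 55}
curr = {"jane_martinez": 91, "sam_lee": 57}
print(score_change_alerts(prev, curr))  # → ['jane_martinez: score jumped from 72 to 91']
```

Downward moves matter too (a prospect disengaging is actionable), which is why the sketch alerts on absolute change rather than increases only.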

Step 5: Output and Integration

The agent's output should go directly into your CRM as structured data, not a PDF that sits in a shared drive. Configure your OpenClaw agent to:

  • Update custom fields in your CRM (capacity score, affinity score, composite score, last research date)
  • Attach the generated brief as a note or document
  • Update prospect stage/status if scoring thresholds are met
  • Trigger task creation for gift officers (e.g., "New high-priority prospect: review brief and schedule intro")

If you're on Salesforce, this is API-native. Raiser's Edge NXT has an API. Most modern CRMs do. If you're on a legacy system without API access, you can configure the agent to output to a shared Google Sheet or CSV as an interim step.
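For the legacy-CRM fallback, the interim output can be as simple as a CSV built with the standard library. A sketch; the column names are illustrative, not a required schema:

```python
import csv
import io
from datetime import date

def scores_to_csv(rows: list[dict]) -> str:
    """Serialize scored prospects to CSV for manual import into a legacy CRM."""
    buf = io.StringIO()
    fields = ["name", "capacity_score", "affinity_score", "composite_score", "last_research_date"]
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for row in rows:
        # Stamp each row with the research date so stale rows are visible
        writer.writerow({**row, "last_research_date": date.today().isoformat()})
    return buf.getvalue()

csv_text = scores_to_csv([
    {"name": "Jane Martinez", "capacity_score": 90, "affinity_score": 60, "composite_score": 67.5},
])
print(csv_text.splitlines()[0])  # → name,capacity_score,affinity_score,composite_score,last_research_date
```

Treat this as a bridge, not a destination: the moment the CSV needs a human to copy values back into the CRM by hand, you've recreated the middleware problem.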


What Still Needs a Human

I want to be clear about what the agent doesn't do, because overpromising is how you lose trust with your research team and gift officers.

The human researcher still needs to:

  • Verify high-stakes data. Before your president asks someone for $5M, a human needs to confirm the capacity data is accurate and current. AI aggregates; humans verify.

  • Assess motivation and mission fit. "This person gave $50K to our rival hospital after their child was treated there" requires contextual interpretation that no model handles reliably.

  • Make ethical and privacy calls. Is it appropriate to include this piece of information? Are we in compliance with CCPA/GDPR? Does this data source meet our institution's ethical guidelines? These are judgment calls.

  • Correct for bias. Wealth screening models systematically underrate women, younger donors, and people of color whose wealth isn't visible in traditional public records (real estate, stock filings). A human researcher knows to look deeper.

  • Build the cultivation strategy. Who should make the first contact? What's the right ask amount? What program area aligns with their interests? This is relationship work, not data work.

  • Edit and contextualize the brief. The AI-generated draft is a starting point. The researcher adds nuance, removes irrelevant data, and frames the narrative for the specific gift officer.

The right mental model is: AI does the assembly; the human does the judgment. Your researcher goes from spending 10 hours gathering data and 2 hours analyzing it, to spending 20 minutes reviewing an AI-assembled brief and 90 minutes adding analysis, context, and strategy.


Expected Time and Cost Savings

Let's do the math with conservative estimates.

Before automation:

  • 250 profiles/year per researcher at ~11 hours each = 2,750 hours/year
  • 1 FTE researcher: $90K salary + benefits
  • Database subscriptions: $30K/year
  • Total: ~$120K/year for 250 profiles
  • Cost per profile: $480

After automation with OpenClaw:

  • Same researcher now reviews/edits AI-generated briefs: ~2 hours per profile for deep prospects, ~20 minutes for mid-level
  • Deep profiles: 500/year at 2 hours = 1,000 hours
  • Mid-level profiles: 2,000/year at 0.33 hours = 660 hours
  • Total: 1,660 hours/year → one researcher covers 2,500 prospects
  • Database subscriptions: $30K (same)
  • OpenClaw agent cost: variable based on usage, but substantially less than a second FTE
  • Cost per profile: drops to roughly $48–$96 depending on depth

That's a 5–10× increase in coverage with the same headcount, or the ability to do meaningful prospect research at organizations that currently can't afford a dedicated researcher at all.
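The before/after arithmetic is easy to sanity-check in a few lines, using the post's own estimates:

```python
# Before automation: one researcher, 250 deep profiles a year
before_cost = 90_000 + 30_000                  # salary + database subscriptions
cost_per_profile_before = before_cost / 250
print(cost_per_profile_before)                 # → 480.0 dollars per profile

# After automation: same budget, 500 deep + 2,000 mid-level profiles
deep, mid = 500, 2_000
review_hours = round(deep * 2 + mid * 0.33)    # human review time → 1660 hours
cost_per_profile_after = before_cost / (deep + mid)
print(review_hours, cost_per_profile_after)    # → 1660 48.0
```

The $48 figure is the blended average; per-profile cost skews higher for deep profiles since they consume most of the review hours, which is where the $48–$96 range comes from.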

The real ROI isn't the cost savings, though. It's the revenue impact. Organizations using AI-assisted prospect research are seeing 3–6× more major gifts identified and cultivated (WealthEngine customer studies, 2022–2026). The University of Michigan reported a 41% increase in major gifts over $100K after implementing ML-assisted prospect development. A large children's hospital went from 220 profiles/year to 1,100 with maintained quality.

Even a modest 20% increase in major gift revenue at an organization raising $5M annually is $1M in additional gifts. Against an automation investment of $30K–$60K, that's a no-brainer ROI.


Where to Start

You don't need to build the entire system at once. Here's a practical sequence:

Week 1–2: Audit your current data sources and API availability. List every tool your researchers use and check which ones have APIs. Talk to your CRM admin about what custom fields and integrations are supported.

Week 3–4: Build a minimal agent on OpenClaw that pulls from 3–4 sources (CRM + wealth screening + news + one public records source) and generates a basic prospect snapshot. Test it against 20 known prospects where you already have detailed profiles. Compare.

Month 2: Add scoring logic and refine the output format based on researcher and gift officer feedback. Add more data sources.

Month 3: Roll out continuous monitoring for your top 500 prospects. Set up automated alerts for score changes.

Month 4+: Expand to full CRM coverage. Tune the scoring model against actual gift outcomes. Start tracking the metrics that matter: time per profile, profiles per researcher, prospects surfaced that convert, and gift revenue attributed to AI-surfaced prospects.


The prospect research teams getting the most out of AI right now aren't the ones trying to eliminate researchers. They're the ones treating AI as a force multiplier: turning one researcher into a team of five, and turning reactive research into a continuous intelligence operation.

If you want to build a prospect research agent and need pre-built components, workflow templates, or integrations for nonprofit CRMs and wealth screening tools, check out Claw Mart, a marketplace of ready-made OpenClaw agent parts that you can plug into your build instead of starting from zero.

And if this isn't your team's core competency (it doesn't need to be), Clawsource it โ€” hire a specialist from the OpenClaw community to build, test, and deploy your prospect research agent so your team can focus on what they do best: building relationships that lead to transformative gifts.
