A new role lands on your desk Monday morning. You have 800 candidates in your CRM from prior searches over the last 18 months. Some of them are a strong fit for this new role and don't know it. Most of them aren't. Without a tool to surface the first group, you reach out to the same five people you remember best and start sourcing fresh.
Open Claude Cowork, drop your candidate-pool export into it along with the JD, and ask which of the 800 are worth a re-engagement call. Twelve minutes later you have a shortlist of 30 with notes on why each one fits. The hiring manager sees real progress on day one, not on day five.
That's the difference between Claude in 2026 and ChatGPT in 2024. ChatGPT answers questions. Claude Cowork runs workflows against your actual files, on your actual machine, and finishes them while you make coffee. This guide walks through six of the workflows recruiters use most, the prompts behind each one, and where Claude wins or loses against the alternatives.
What "Claude" actually means in 2026
Anthropic ships Claude as four different products, and the answer to "is Claude good for recruiting?" depends on which one you mean.
Claude.ai is the web chat interface. It's what most recruiters tried first. You paste text, it writes text back. Good for one-off rewrites, useless for batch work.
Claude Code is a command-line tool for developers. It reads code, writes code, and runs in a terminal. Some technical recruiters use it for GitHub deep-dives, but the setup curve is steep if you don't already work in a terminal.
Claude Cowork, launched in January 2026, is the one that matters most for recruiters. It's a desktop tool that lets Claude operate on your local files, run multi-step workflows, and produce artifacts you can hand to a hiring manager. It does not require coding. You point it at a folder, type what you want, and it goes.
Anthropic API is the developer-facing version that powers third-party tools. Most recruiters won't touch it directly, but it's why your ATS or sourcing platform may already be running on Claude under the hood.
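Most recruiters can skip this, but if you're curious what "running on Claude under the hood" looks like, here's a minimal sketch of the call a third-party tool makes, using Anthropic's official Python SDK (the model name is illustrative; check Anthropic's docs for current versions):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from your environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative; use whatever model is current
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "List the five must-have skills in this JD: ..."}
    ],
)
print(message.content[0].text)
```

An ATS feature marketed as "AI-powered candidate summaries" is usually some version of this call with a longer prompt.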
The rest of this guide is about Claude Cowork specifically. When we say "Claude" below, that's what we mean. If you're still mapping the broader landscape of AI tools that recruiters use, our guide to AI recruiting tools by category starts there.
The 6 workflows that save the most time
Each of the workflows below has been tested on real recruiter desks. They share a structure: drop a file or folder into Cowork, paste a prompt, get a working output. None of them require knowing how to code. All of them assume you have a Pro plan ($20 a month) or higher.
1. JD-to-candidate-brief generator
Most rough JDs from hiring managers are too vague to source against. Before you write the search, you need a candidate brief: must-have skills, nice-to-haves, deal-breakers, ideal background, and the salary range that fits.
Workflow. Drop the rough JD into Cowork. Paste this prompt:
Read the attached JD. Generate a sourcing-grade candidate brief in this format: must-have skills (max 5), nice-to-haves (max 5), deal-breakers, ideal current title and company type, expected years of experience, comp band guess based on title and location, and three Boolean search strings I can use on GitHub, LinkedIn, and Glozo. Flag any part of the JD that is too vague to source against and ask me one question per flag.
Output. A structured brief plus targeted questions for the hiring manager. The questions matter. They turn the same conversation that took an hour into a focused 15-minute clarification.
Time saved. From 60 minutes of brief-writing to roughly 10 minutes of prompt and review.
2. Pool re-engagement: match a new role to candidates you already know
Every recruiter has a pool of past candidates who almost-but-didn't fit a previous role. Some of them are exactly right for the role on your desk today and don't know it. The bottleneck has always been time: you can't manually re-read 800 candidate notes for every new req. Cowork can.
This is also the workflow with the cleanest compliance profile. You're not scoring an applicant funnel for a hiring decision. You're deciding which of your existing relationships to reach out to about a new opportunity. The candidate decides whether to engage. The hiring manager still makes the hiring call later, in their normal process.
Workflow. Export your candidate-pool data from your ATS or CRM as a CSV. The export should include name, current title, last role discussed, skills/tags, last contact date, and any notes you've kept. Drop the CSV into Cowork along with the new JD. Paste this prompt:
Read the attached candidate-pool CSV and the attached JD for a new role. Surface the 30 candidates from the pool who are the best potential fit for this role and worth a re-engagement call. For each, output: name, current title, last contact date, top three reasons this role might fit them now, top one reason it might not, and a suggested opening line for the outreach that references something specific from their notes. Return as a CSV. Do not surface candidates based on protected attributes. If the pool is too thin to produce 30 strong matches, return however many real matches you find and tell me how many.
Output. A shortlist of 30 (or fewer real matches) in CSV form, ready to feed into Workflow 4 below for personalized outreach. The recruiter still chooses who to call.
Time saved. From "I can't review my whole pool, I'll just remember the top five" to a real review of 800 in 10 minutes. The bigger win is the candidates you would have forgotten about getting a call.
Privacy note. The candidates in your pool should have a privacy notice on file that names AI-assisted processing of their data, plus a documented retention policy. This is what most state privacy laws (CCPA, Virginia, Colorado, and the rest of the 2023-to-2026 wave) actually require. See the data privacy subsection below for the practical setup.
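One practical pre-flight step before you drop the export in: confirm the CSV actually has the fields the prompt relies on. A minimal sketch (file and column names are assumptions; rename to match your ATS or CRM export):

```python
import csv

# Columns the re-engagement prompt relies on. Assumed names; match your export.
REQUIRED = {"name", "current_title", "last_role_discussed",
            "skills", "last_contact_date", "notes"}

with open("candidate_pool.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)
    missing = REQUIRED - set(reader.fieldnames or [])
    if missing:
        raise SystemExit(f"Export is missing columns the prompt needs: {sorted(missing)}")
    rows = list(reader)

# Candidates with empty notes give Claude nothing specific to match on.
thin = [r["name"] for r in rows if not r["notes"].strip()]
print(f"{len(rows)} candidates loaded; {len(thin)} have no notes (expect weaker matches).")
```

Thirty seconds of checking beats discovering mid-run that your CRM exported your tags into a column with a different name.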
3. GitHub deep-dive for technical candidates
Tech recruiters who don't write code still need to evaluate engineering candidates from their GitHub history. This is where Claude beats ChatGPT decisively, because Cowork can clone a repository locally and read the actual code, not just the README.
Workflow. Get the candidate's GitHub username. In Cowork, paste:
Look up GitHub user [username]. For their three most-starred and three most-recently-active repositories, summarize: project purpose in one sentence, the candidate's actual contribution share (not just commits, but ownership of meaningful files), the tech stack, code quality signal (tests, structure, documentation), and any patterns that would matter to a senior engineer hiring manager. End with a recruiter-friendly summary I can paste into a candidate note.
Output. A digest you can read in 90 seconds. The "actual contribution share" piece is what most recruiters miss. A candidate with 800 commits to someone else's project may have written almost none of the meaningful code. Cowork can tell the difference because it reads the actual files, not just the GitHub UI.
Time saved. From 30 minutes of clicking around GitHub to about 2 minutes of read time.
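If you want to see roughly what "actual contribution share" means under the hood, here's a sketch that counts surviving line ownership in a locally cloned repo via git blame. This illustrates the signal; it is not Cowork's actual method, and it's slow on large repos:

```python
import subprocess
from collections import Counter

def line_ownership(repo: str) -> Counter:
    """Count surviving lines per author across a cloned repo. Binary files add noise."""
    files = subprocess.run(["git", "-C", repo, "ls-files"],
                           capture_output=True, text=True, check=True).stdout.splitlines()
    counts: Counter = Counter()
    for path in files:
        blame = subprocess.run(
            ["git", "-C", repo, "blame", "--line-porcelain", "HEAD", "--", path],
            capture_output=True, text=True,
        ).stdout
        # --line-porcelain emits an "author <name>" line for every line in the file.
        counts.update(line[len("author "):] for line in blame.splitlines()
                      if line.startswith("author "))
    return counts

print(line_ownership("./cloned-repo").most_common(5))
```

A candidate who owns 70% of the surviving lines in a project tells you something a raw commit count never will.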
4. Personalized outreach at scale
The single biggest reason recruiters get ignored on LinkedIn is a generic message. The single biggest reason they don't personalize is time. Cowork closes that gap.
Workflow. Export your candidate list as a CSV (from Glozo, your ATS, a free resume search tool, or a spreadsheet). The CSV needs at minimum: name, current title, current company, and one personal hook per candidate (recent project, recent post, talk they gave, repo they shipped). Drop the CSV into Cowork. Paste:
For each row in the attached CSV, write a personalized outreach message under 90 words. The role is [paste 2-line role pitch]. Match the candidate's apparent seniority and tone. Open with a specific reference to their hook column, no flattery, no "I came across your profile." Close with one specific next step (a 20-minute conversation by Tuesday, not "open to chat"). Output as a CSV with the original columns plus a "message" column.
Output. A CSV with a personalized message per candidate, ready to paste into your sequencer or your LinkedIn outreach tool.
Time saved. From 8 to 12 minutes per manually personalized message to about 20 seconds per message in batch.
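Before the batch hits your sequencer, spot-check it against the two constraints in the prompt. A minimal sketch (file and column names are assumptions; the "message" column matches what the prompt above asks for):

```python
import csv

BANNED = ("i came across your profile", "i hope this finds you well")

with open("outreach_with_messages.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        words = row["message"].split()
        if len(words) > 90:
            print(f"{row['name']}: over 90 words ({len(words)})")
        if any(phrase in row["message"].lower() for phrase in BANNED):
            print(f"{row['name']}: generic opener slipped through")
```

Claude follows these constraints reliably, but a ten-line check is cheaper than one candidate noticing the message that slipped.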
5. Interview synthesis from full transcripts
Most note-taking happens during the interview, which means the recruiter or hiring manager is dividing attention between the candidate and the keyboard. Cowork's 200,000-token context window is long enough to read a full 90-minute transcript and produce a structured report.
Workflow. Get the transcript file from your interview tool (most platforms export to .txt or .vtt). Drop it into Cowork with this prompt:
Read the attached interview transcript. Output a structured report: candidate strengths (max 5, with the timestamp where each was demonstrated), candidate concerns (max 5, with timestamps), points that might warrant follow-up, technical depth observations if applicable, communication style observations, the three best follow-up questions for the next round, and a one-paragraph summary of what was discussed. Do not produce a hiring recommendation. The recruiter and hiring manager will make that call separately.
Output. A structured set of notes with timestamps, ready to log in your ATS or share with the hiring manager. The timestamps let anyone verify a claim in the original transcript instead of arguing from memory. The output is recruiter notes, not a decision.
Time saved. From 25 minutes of writing notes from memory to about 4 minutes of read time on the structured output.
Privacy note. Interview transcripts are personal data, often including details candidates didn't expect to be machine-processed. Keep these notes as an aid for the recruiter and hiring manager, not as the basis for the decision itself. If you process transcripts at scale, the data privacy subsection below covers the Anthropic-tier and disclosure-policy questions worth resolving before adopting this workflow.
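If your tool exports .vtt rather than .txt, a quick pre-processing pass flattens the cues to plain text while keeping the start timestamps the report depends on. A rough sketch; VTT exports vary by vendor, so adjust for your tool's format:

```python
import re
import sys

# Matches a cue timing line like "00:14:05.120 --> 00:14:09.000" (hours optional).
cue = re.compile(r"^((?:\d{2}:)?\d{2}:\d{2})\.\d{3} --> ")

lines = []
for raw in open(sys.argv[1], encoding="utf-8"):
    raw = raw.strip()
    m = cue.match(raw)
    if m:
        lines.append(f"\n[{m.group(1)}]")  # keep the cue start time
    elif raw and raw != "WEBVTT" and not raw.isdigit():  # drop header and numeric cue IDs
        lines.append(raw)
print(" ".join(lines))
```

Run it as `python flatten_vtt.py interview.vtt > interview.txt` and drop the .txt into Cowork.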
6. Pipeline reporting through Cowork artifacts
Cowork can produce interactive HTML files (Cowork calls them artifacts) that update from your local data. For a recruiter who tracks pipeline in a spreadsheet, this means a Monday-morning dashboard the hiring manager can actually open.
Workflow. Have your pipeline tracking file (CSV or Excel) on your desktop. Paste:
Read the attached pipeline file. Generate a one-page interactive HTML dashboard with: count of candidates by stage, time-in-stage by role, recruiter activity by week, top three roles by pipeline health (defined as healthy = 5+ active candidates with conversations in the last 7 days), and any candidate flagged in the file as at-risk of dropping. Use a clean professional design. The dashboard should re-render with updated data when I drop a new file in.
Output. A self-contained HTML file you can email or share. It's the kind of artifact that earns trust with hiring managers because it shows the work, not just the result.
Time saved. From two hours building a deck for the weekly hiring meeting to about 10 minutes of prompt-and-tweak.
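The "pipeline health" rule in the prompt is simple enough to compute yourself, which is worth doing once to sanity-check the dashboard. A sketch in pandas (column names are assumptions; match your tracking file):

```python
import pandas as pd

df = pd.read_csv("pipeline.csv", parse_dates=["last_conversation"])

# Healthy = 5+ active candidates with conversations in the last 7 days (per the prompt).
week_ago = pd.Timestamp.now() - pd.Timedelta(days=7)
active = df[(df["last_conversation"] >= week_ago) & (df["stage"] != "rejected")]
health = active.groupby("role")["candidate"].nunique().sort_values(ascending=False)

print(health[health >= 5])  # roles currently meeting the healthy threshold
```

If the dashboard and this number disagree, the dashboard's definition drifted; re-check the prompt.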
A note on compliance
US recruiters running AI-assisted workflows in 2026 face two separate compliance pillars: bias and privacy. They have different rules, different regulators, and different remedies. This section covers what to know on both. None of it is legal advice. If you process candidate data at scale, talk to a privacy lawyer about your specific setup.
Bias and AEDT laws (US)
Liability for biased hiring outcomes sits with the recruiter and the employer, not with the AI vendor. NYC Local Law 144 (effective 2023) requires an annual bias audit by an independent auditor for any "automated employment decision tool" used to substantially assist hiring decisions for NYC-based roles or candidates. EEOC technical assistance from 2023 confirms that Title VII applies to AI hiring tools. Illinois HB 3773 (effective January 2026) adds candidate-disclosure requirements. Colorado's AI Act adds impact-assessment requirements for high-risk hiring AI.
Claude has one design choice that helps: it tends to refuse prompts that filter on protected attributes (race, gender, age, religion, national origin, disability), even when they're disguised as "culture fit" or "language native." That removes one easy way to get into trouble. ChatGPT will more often execute the prompt without pushing back.
That said, the refusal pattern is a guardrail, not a compliance solution. A neutral prompt run on a non-representative input set still produces disparate impact. AEDT laws apply regardless of which model you used. The workflows in this guide are written to avoid the highest-risk patterns (notably, Workflow 2 is reverse-matching against your own pool, not scoring an applicant funnel; Workflow 5 produces notes, not hiring recommendations). If your workflow does score or rank candidates from an applicant funnel at scale, especially for NYC roles or under Illinois disclosure rules, you need a real bias-audit policy and you may need an independent auditor.
Data privacy
Resumes, transcripts, and candidate notes are personal data. Putting them through any cloud AI tool means thinking about how that data is stored, who has access, and whether the model trains on it. Three concrete things matter for a US recruiter using Claude in 2026.
Turn off "Improve Claude for everyone" before you put a single resume in. This is the most actionable item in the article. On a new Pro or Max account, the toggle is on by default. While it's on, your conversations are eligible for model training and retained for up to 5 years. While it's off, retention drops to 30 days and training stops. Settings → Privacy → "Improve Claude for everyone" → toggle off. Team and Enterprise customer data is never used for training regardless of the toggle, but consumer-tier accounts default to opt-in. Most recruiters miss this.
State-level privacy laws apply to candidate data, even if the candidate isn't a customer. California's CCPA/CPRA covers candidate data of California residents and requires notice at collection, the right to know what's processed, the right to delete, and a documented retention policy. The 2023-to-2026 wave of state laws (Virginia, Colorado, Connecticut, Utah, Texas, Indiana, Tennessee, Oregon, Montana, Iowa, Delaware, New Jersey, New Hampshire, Maryland, Minnesota) extends similar requirements across most of the country. None of these are GDPR-strict, but a recruiter putting candidate resumes into a personal Pro account without disclosure to the candidate was already in risky territory in 2023 and is more clearly noncompliant in 2026. The fix is not complicated: a candidate-facing privacy notice that names AI processing, plus a retention policy, plus the toggle above.
Pro and Max plans run on Consumer Terms; Team and Enterprise run on Commercial Terms. This distinction matters even for US-only recruiter desks. Commercial Terms include a Data Processing Agreement with named subprocessors, formal data deletion timelines, and contractual data-handling commitments that match what most agency clients now ask for in their vendor agreements. If you sign client contracts that include data-handling clauses (and most enterprise clients do), Team plan is the cleaner path. Pro is fine for true solo recruiters with no client data agreements in place.
If your sourcing reaches into the EU, UK, Canada, or Quebec, you need a Team or Enterprise plan, not Pro. GDPR Article 28 requires a Data Processing Agreement with Standard Contractual Clauses for the cross-border transfer to a US-based AI processor. That DPA is included in Commercial Terms only. GDPR Article 22 also restricts solely-automated hiring decisions; the workflows in this guide keep the recruiter in the decision seat, but if you change them to let Claude make the call, Article 22 becomes a problem on EU-touching cases. For most US recruiters this paragraph won't apply. For the subset that source globally, it does.
Pairing Claude Cowork with Glozo: the data layer your agent is missing
Claude Cowork is a strong agent layer. It runs workflows, reads files, generates outputs, makes artifacts. What it doesn't have is talent data. Without good candidate data and live market data, every Cowork workflow is operating on whatever scraps you point it at.
That's where pairing it with Glozo matters.
Glozo gives Cowork three things it can't get on its own. The first is enriched candidate profiles, aggregated from 30+ sources and processed through the proprietary Skill Graph (which converts experience into a weighted skill model rather than keyword matches). The second is the Market Compensation Estimate, a salary range per candidate based on a statistical model trained on 10M+ data points monthly. The third is the "Open to Offers" signal, a behavioral model that surfaces passive candidates who are likely receptive, before you spend a credit reaching out.
The practical workflow today looks like this:
- Run a sourcing search in Glozo and identify a candidate list. Use the "Open to Offers" filter and the comp band that fits the role.
- Export the list as a CSV. The export includes enriched fields, not just name and email.
- Drop the CSV into Claude Cowork along with the JD and your outreach pitch.
- Run workflows 2 through 4 from this guide: Cowork matches the list against your active roles, does the GitHub deep-dive on the top tier, and writes personalized outreach at scale.
- Cowork creates an interactive HTML dashboard (workflow 6) so the hiring manager can see pipeline health on Monday morning.
This is a manual handoff today (browser plus export plus Cowork). A native MCP integration is in development, which will let Cowork pull Glozo data on demand without the export step. Until then, the manual workflow already collapses what used to be a 6-hour sourcing-and-outreach run into something a recruiter can finish before lunch.
Open Glozo to source your next list →
Cost and setup
Cowork runs on the standard Claude subscription tiers. For most solo and agency US recruiters, Pro at $20 per month is the right starting point. It covers 20 to 30 workflows a week, which is more than enough for a single-recruiter desk. Turn off the "Improve Claude" training toggle in settings before you process any candidate data.
Max at $200 per month covers heavy batch use. Right for high-volume desks running multiple full sourcing-to-outreach cycles per week.
Team at $20 per seat per month is the same per-seat price as Pro but runs on Commercial Terms, which include a formal Data Processing Agreement, named subprocessors, and contractual data-handling commitments. For any recruiter who signs client contracts with vendor data-handling clauses, runs a multi-recruiter agency, or sources candidates from outside the US, Team is the better default at the same price. Pro stays the right pick only for true solo, US-only recruiters with no client-data contracts in place. Note that Team has a seat minimum, so verify the current minimum on Anthropic's pricing page before subscribing.
Enterprise (custom pricing) adds SSO, audit logs, custom retention, and named contracts. Right for larger agencies or in-house TA teams with specific data-handling requirements from clients or hiring managers.
Setup time is about 10 minutes: download Cowork, sign in with your Anthropic account, and connect a folder on your desktop. There's no command line, no plugin marketplace to navigate before you start, and no configuration file to write.
For comparison, a self-hosted open-source agent like OpenClaw runs at about $0.10 to $0.50 in API fees per query with no per-seat license. Because OpenClaw runs on your own machine, you control where candidate data goes; point it at a locally hosted model and no candidate data ever leaves your infrastructure. That's the strongest privacy posture of the three tools, but it costs engineering time to set up and maintain. ChatGPT Plus is $20 a month, the same as Cowork Pro, with similar Consumer-Terms data handling. For a solo recruiter, Cowork's no-config default is usually the right tradeoff.
When Claude wins, when ChatGPT wins, when OpenClaw wins
The honest answer is that all three tools have different strengths. Picking the right one depends on what you actually need.
| Use case | Best tool | Why |
|---|---|---|
| Pool re-engagement against a new role | Claude Cowork | Reads a candidate-pool CSV alongside a JD, surfaces real fits at scale. ChatGPT requires manual paste and loses context fast. |
| Personalized outreach at batch | Claude Cowork | Reads CSV, writes CSV, follows tone instructions reliably. |
| One-off rewriting (single JD, single email) | ChatGPT or Claude.ai (web) | Both work; pick whichever you already pay for. |
| Image generation for employer branding | ChatGPT | DALL-E built in. Claude has no native image generation. |
| Voice or video output | ChatGPT | Voice mode and Sora are bundled in Plus. Claude doesn't have these. |
| GitHub deep-dive on technical candidates | Claude Cowork or Claude Code | Reads actual code, not just the README. ChatGPT can browse but can't analyze code at the same depth. |
| Self-hosted privacy (data never leaves your machine) | OpenClaw | Open-source, runs locally. Claude is cloud-based. ChatGPT is cloud-based. |
| Lower-risk default for compliance-sensitive work | Claude Cowork | Refusal pattern on protected-attribute prompts is the strongest of the three. Not a compliance solution, but a useful guardrail. |
| Interview transcript synthesis (90-minute calls) | Claude Cowork | 200K context handles full transcripts; ChatGPT context is shorter. |
| Custom multi-step workflows you want to own | OpenClaw | Modular skills, full customization, but requires engineering setup. |
For most US-based recruiters running a generalist or tech req desk in 2026, Claude Cowork covers the largest share of the time-consuming work. ChatGPT is the right second tool for image and voice. OpenClaw is worth the setup cost only if data sovereignty or deep customization is a hard requirement.
Limitations to plan around
A few things Claude Cowork doesn't do, that you should know before committing.
Cloud, not local. Cowork runs against Anthropic's servers. Files you process are sent through their infrastructure. For most recruiters this is fine and matches what they already do with their ATS. For agencies with strict candidate-data agreements with clients, the contract terms need a check before adoption.
Anthropic moves fast. Claude Cowork launched in January 2026. New features, model updates, and surface changes ship every few weeks. A workflow that works today may need a small prompt tweak in three months. This is true of every AI product in 2026, but worth noting.
External APIs need MCP setup. Cowork can talk to external services (your ATS, your sourcing platform, Glozo) only when an MCP integration is configured. Some integrations exist out of the box, others require setup, others are still being built. For now, plan on file-based handoffs (CSV export and import) for tools you can't connect natively.
Cowork is not an ATS. It runs workflows on top of your candidate data, but it doesn't replace a tracking system. If you don't have one yet, our guide to open-source ATS tools covers options for solo and agency recruiters who want something they can host themselves.
No image or voice output. If your recruiter workflow involves employer-branding images or voice notes for candidates, Cowork doesn't generate either. Use ChatGPT for those steps and Cowork for the analysis-and-text steps.
Where to start
If you're new to Cowork, the fastest path to value is workflow 2 (pool re-engagement) on your next new role. Export your candidate pool, drop it in alongside the JD, and let Cowork surface the 30 candidates worth a re-engagement call. It's the workflow that produces the most obvious time savings on day one and the one that builds the strongest case for paying $20 a month for Pro.
Once that's working, layer in workflow 4 (personalized outreach at scale) the next time you run a sourcing list. From there, the rest of the workflows fall into place as you hit the underlying problem they solve.
Open Glozo and pull your next sourcing list →
The Cowork agent layer is more useful when the data feeding it is good. Glozo's enriched profiles, "Open to Offers" signal, and Market Compensation Estimate are the data layer most Cowork-powered recruiter workflows need to be accurate at the candidate level.

