Claude Code Routines in Practice: How Indie Makers Replace Cron Jobs with Cloud AI Agents
Every morning, you spend 30 minutes cleaning up "last night's mess" — scanning CI logs, organizing PRs, triaging Sentry errors, updating Linear tickets. These tasks require judgment, not just data forwarding, so Zapier can't handle them. Previously, your only options were sitting at the computer yourself or building a complex GitHub Actions workflow that needed manual intervention every time something broke.
On April 14, 2026, Anthropic launched Claude Code Routines: a feature that keeps Claude Code sessions running on Anthropic's cloud infrastructure while your laptop stays closed. It's the first product to turn "the AI Agent itself into a scheduled task" — no server required, no DevOps background needed.
This article walks you through setting up your first Routine from scratch, clears up the three most common misconceptions, and provides three templates indie makers can use immediately.
TL;DR
- Routines are "scheduled Claude Code sessions" that run on Anthropic's cloud — when they hit a problem, they reason their way around it instead of halting like a fixed-script cron job
- Three tiers: Cloud Routines (cloud, no local access) / Desktop tasks (local) / /loop (current session) — 90% of the time, starting with /loop is enough
- Pro plan's 5/day is sufficient: high-frequency events (PR opens) use Webhook triggers, which don't count toward the daily cap; only fixed schedules use Schedule triggers
- Optimal combo: Routines handle "repetitive work requiring AI judgment," while Zapier/Make continues handling "mechanical data transfers"
- Before creating Routines, make sure you've set up your Claude Code CLAUDE.md and Skills architecture — this is the foundation that makes Routine prompts effective
What Are Routines, Really? Not Smart Cron Jobs — They're AI Agent Schedulers
Many people see "automated scheduled execution" and think cron jobs, but Routines differ from traditional cron jobs in a fundamental way.
Traditional cron job limitations: They execute fixed shell scripts. When an unexpected error occurs, they stop and wait for manual intervention. You can only tell them "run this script at 8 AM daily," but when the script encounters something unexpected (repo structure changed, API response format changed), it fails and notifies you to fix it.
How Routines differ: Each trigger actually starts a complete Claude Code session running on Anthropic's cloud infrastructure. Claude has full reasoning capabilities — when it encounters errors, it attempts alternative approaches, works around problems, or leaves clear explanations when it can't continue. The Register calls them "dynamic cron jobs," and that description is accurate: they execute "goals," not "fixed steps."
Key design insight: With traditional cron jobs, you give them "steps to execute." With Routines, you give them "goals to achieve" and let Claude decide the path.
# Bad: Step-oriented (cron job thinking)
Review each PR by:
1. Run git diff on the PR
2. Check for lint errors
3. Post a comment with format X
# Good: Goal-oriented (Routine thinking)
Review all open PRs in this repo.
For each PR, assess code quality and potential issues.
Post a concise review comment. Do NOT approve or merge. Keep comments under 150 words.
The first approach makes Claude dependent on fixed steps — if any step fails, it gets stuck. The second lets Claude judge the path autonomously, making it far more resilient to the unexpected.
Three-Tier Scheduling Decision Framework: Do You Actually Need Cloud Routines?
Anthropic's official docs offer three scheduling options, but media coverage focuses almost exclusively on Cloud Routines, leading many to think it's "the only option." In reality, 90% of indie makers can start with something simpler.
| Option | Execution Environment | Local Access | Laptop Must Be On | Best For |
|---|---|---|---|---|
| Cloud Routines | Anthropic cloud | No | No | Tasks that must run while laptop is off |
| Desktop scheduled tasks | Local machine | Yes | Yes | Tasks needing local files or databases |
| /loop | Current session | Yes | Yes | Session-based or experimental tasks |
Decision tree:
1. Does this task need to run while I'm asleep with my laptop closed?
   - Yes → Cloud Routines
   - No → continue to question 2
2. Does this task need access to local files or environment?
   - Yes → Desktop scheduled tasks
   - No → /loop is sufficient
Persona scenarios:
- Leo (SaaS founder) needs his morning PR digest ready before he wakes up → Cloud Routines
- Leo's Sentry alert analysis needs to read a local `.env.local` to connect to a private Sentry endpoint → Desktop scheduled tasks
- Mei (tech YouTuber) wants to test a new weekly cleanup workflow: run it once with /loop first, confirm the logic works, then upgrade to Cloud Routines → start with /loop
Important: Every Cloud Routine execution is a fresh clone — a clean environment on Anthropic's cloud that cannot read your local `.env.local` or local databases. Tasks requiring local state must use Desktop scheduled tasks.
Prerequisites and Your First Routine: From Zero to First Execution
The official marketing says Cloud Routines are "zero maintenance, zero DevOps," and that's true — but there's a one-time setup cost (about 15-30 minutes). Understanding this "invest once" nature is more useful than being misled by the "zero ops" framing.
Prerequisites Checklist
- Claude Code on the web enabled: Go to claude.ai/code and confirm you have web access to Claude Code (Pro plan or above)
- GitHub repository connected: In Claude Code on the web settings, authorize Anthropic to read your GitHub repository
- Environment Variables configured: If your Routine needs to connect to external APIs (Slack, Linear, Sentry), enter the corresponding API keys in the Routine's Environment Variables section — this is the only way Cloud Routines can "remember" information across executions
Creating Your First Routine (Morning PR Digest Example)
Step 1: Go to claude.ai/code/routines → click New Routine
Step 2: Choose trigger type
- Schedule: Set a cron schedule (e.g., daily 08:00 Taiwan time → `0 0 * * *` UTC)
- Webhook: Triggered by GitHub PR/issue events
- API: Triggered by external system calls
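Since the cron expressions are evaluated in UTC, you have to do the timezone conversion yourself. A minimal sketch of the arithmetic (Taiwan is UTC+8 with no daylight saving; the helper only handles whole-hour offsets and ignores day-of-week rollover when the UTC time crosses midnight):

```python
def local_to_utc_cron(hour: int, minute: int, utc_offset: int,
                      day_of_week: str = "*") -> str:
    """Convert a local daily/weekly time to a UTC cron expression.

    Simplification: whole-hour offsets only, and weekly schedules
    assume the UTC time stays on the same weekday.
    """
    utc_hour = (hour - utc_offset) % 24
    return f"{minute} {utc_hour} * * {day_of_week}"

print(local_to_utc_cron(8, 0, 8))        # daily 08:00 UTC+8  -> 0 0 * * *
print(local_to_utc_cron(9, 0, 8, "1"))   # Monday 09:00 UTC+8 -> 0 1 * * 1
```

Both results match the cron lines used in the templates later in this article.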
Step 3: Select repository and configure the permissions the Routine needs (read access, write PR comments, etc.)
Step 4: Write the Routine prompt (see templates below)
Step 5: Save and test (click Run now to trigger a manual execution and verify the output matches expectations)
The first time you see execution logs appear for a Routine you didn't manually trigger, it's a peculiar feeling — that's a real Claude Code session running, just without you being present.
/schedule CLI vs Web UI: Real Differences Between the Two Setup Methods
Routines support two creation methods with identical functionality but different experiences:
Web UI (claude.ai/code/routines): Best for users like Mei who aren't comfortable with CLI, or scenarios where you want visual confirmation of settings. Point-and-click interface, intuitive and easy.
/schedule CLI: Best for developers already in a Claude Code terminal workflow, or freelancers like Chris who need to batch-manage Routines across multiple client repos.
# The following is illustrative syntax — verify actual flag names against official docs
# Create a PR digest Routine that runs daily at 08:00
/schedule create \
--name "morning-pr-digest" \
--cron "0 0 * * *" \
--repo "your-org/your-repo" \
--prompt "Review all open PRs opened in the last 24 hours..."
# List all existing Routines
/schedule list
# Manually trigger once (for testing)
/schedule run morning-pr-digest
Note: The official documentation currently emphasizes the Web UI creation flow, and the flag syntax above is illustrative. Before creating, run `/schedule` in-session to see up-to-date CLI documentation, or use the Web UI (claude.ai/code/routines) to confirm current syntax.
Compared to manual setup, the CLI advantage is scriptable batch operations. Chris's scenario: 5 client repos, each needing a nightly PR review Routine — CLI lets him script this in one go instead of clicking through the Web UI 5 times.
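Chris's batch setup can be sketched as a short script that generates one command per client repo. The repo names are invented, and the flag names follow the hypothetical syntax shown above, so verify against the official docs before running anything:

```python
# Generate one illustrative /schedule command per client repo.
# Repo names and flag syntax are hypothetical examples.
CLIENT_REPOS = [
    "client-a/webapp",
    "client-b/api",
    "client-c/mobile",
]

PROMPT = "Review all open PRs opened in the last 24 hours..."

def build_schedule_command(repo: str) -> str:
    """Build a /schedule create command string for one repo."""
    name = f"pr-digest-{repo.split('/')[0]}"
    return (f'/schedule create --name "{name}" '
            f'--cron "0 0 * * *" --repo "{repo}" --prompt "{PROMPT}"')

for repo in CLIENT_REPOS:
    print(build_schedule_command(repo))
```

Printing the commands first (instead of executing them) gives you a dry run: you can eyeball all five before pasting them into a session.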
For cron expression syntax, you can reference the scheduling design in OpenAI Codex CLI — while it's a different tool, cron syntax is universal, and you can also see how two AI coding agents differ philosophically in their scheduling design.
Three Essential Indie Maker Routine Templates (with Full Prompt Examples)
The core principle of Routine prompt design: give a goal + define an output format + set "do not" boundaries. Missing any one of these and Claude may be overly aggressive (pushing to main for you) or overly conservative (producing lengthy analysis with no action).
I've tested these prompt patterns in Shareuhack's agent fleet — our reviewer agent (Eno) automatically reviews content drafts daily, using logic similar to the PR review template below. VentureBeat's enterprise testing of Routines also showed that well-structured Routine prompts reduced PR review cycles from an average of 2.3 rounds to 1.4 rounds (note: enterprise-scale repos; indie maker scenarios may differ significantly).
Template 1: Morning PR Digest (Schedule trigger, daily at 08:00)
Trigger type: Schedule
Cron: 0 0 * * * (UTC, corresponding to 08:00 Taiwan time)
Repository: {your-repo}
Prompt:
Review all open pull requests in this repository that were updated in the last 24 hours.
For each PR, provide:
- PR title and number
- One-line summary of what it does
- Key concerns or blockers (if any)
- Suggested action: Ready to merge / Needs revision / Needs more info
Format the output as a markdown summary. Post it as a new comment on each relevant PR.
Do NOT:
- Approve or merge any PR
- Leave more than one comment per PR
- Comment on PRs older than 24 hours
Template 2: Weekly Documentation Scan (Schedule trigger, weekly)
Trigger type: Schedule
Cron: 0 1 * * 1 (every Monday UTC 01:00, corresponding to Monday 09:00 Taiwan time)
Repository: {your-repo}
Prompt:
Scan the /docs directory for documentation that may be outdated.
Check for:
- References to deprecated APIs or features
- Version numbers that don't match the current package.json
- Broken internal links
- TODO or FIXME comments older than 30 days
Create a GitHub Issue titled "Weekly Docs Audit - {date}" with a checklist of findings.
If no issues found, create a brief issue noting the audit was clean.
Do NOT:
- Edit any files directly
- Delete any content
- Create more than one issue per run
Template 3: PR Open Webhook Trigger (auto-review on every PR)
Trigger type: Webhook (GitHub PR opened/synchronize event)
Repository: {your-repo}
Prompt:
A pull request has been opened or updated. Review it for:
1. Code quality: Are there obvious bugs, anti-patterns, or style issues?
2. Test coverage: Does the PR include tests for new functionality?
3. Documentation: Are new functions/APIs documented?
Post a constructive review comment summarizing your findings.
Use this format:
- What looks good
- Concerns to address
- Suggestions (optional)
Do NOT:
- Approve or request changes (leave that to human reviewers)
- Post more than one comment per PR update
- Comment on draft PRs
Key point: Template 3's Webhook trigger does not count toward the Pro plan's 5/day schedule cap. Each PR open is an independent trigger, ideal for high-frequency scenarios.
Connecting External Tools: MCP Connectors for Slack and Linear Notifications
Routines connect to external tools (Slack, Linear, Google Drive, etc.) through MCP connectors, allowing Routine output to reach your work environment instead of staying only on GitHub.
But there's an important mental model to establish first: Routines are the "AI judgment layer," not an "integration platform."
- Routines excel at: Understanding context, making judgments, generating summaries, analyzing document quality
- Routines are not great at: Mechanical data transfers (copying data from service A to service B)
- Optimal combination: Routines for judgment, Zapier/Make for data transfer
Forcing Routines to handle simple data forwarding (e.g., "sync new GitHub issues to Notion") wastes tokens and is less reliable than Zapier's deterministic workflows.
If you haven't evaluated which MCP servers best fit an indie maker workflow, check the comprehensive MCP servers guide for selection guidance.
Setting Up a Slack Connector (Notification Example)
- In Claude Code on the web settings → Connectors → Add Slack
- Authorize Anthropic to read/write to your specified Slack workspace and channel
- Add Slack output instructions to your Routine prompt
# Add to the end of your Routine prompt:
After completing the review, send a summary to Slack channel #dev-digest using this format:
"Morning PR Digest ({date}): {X} PRs reviewed, {Y} need attention"
Include links to PRs that need revision.
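To sanity-check the summary format before handing it to Claude, you can render it yourself. A sketch only: the function name and the ISO date format are my assumptions, since the prompt's `{date}` placeholder doesn't pin down a format:

```python
from datetime import date
from typing import Optional

def pr_digest_summary(reviewed: int, need_attention: int,
                      on: Optional[date] = None) -> str:
    """Render the Slack one-liner in the format given in the prompt.

    ISO date format is an assumption; the prompt only says {date}.
    """
    d = (on or date.today()).isoformat()
    return (f"Morning PR Digest ({d}): {reviewed} PRs reviewed, "
            f"{need_attention} need attention")

print(pr_digest_summary(4, 2, date(2026, 4, 15)))
# Morning PR Digest (2026-04-15): 4 PRs reviewed, 2 need attention
```

Comparing Claude's first real Slack message against a mock like this makes it obvious whether the prompt's format instructions were followed.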
Practical tip: MCP connector setup itself is relatively straightforward, but the Slack output formatting in your Routine prompt needs testing. After the first execution, check whether the Slack notification format meets expectations before deciding if the prompt needs adjustment.
Making the Most of Pro's 5/Day: Schedule Allocation Strategy
The Pro plan's 5 daily schedule triggers look limited, but once you understand the billing mechanics, 5 is actually enough for most indie makers.
The critical distinction: Schedule trigger vs Webhook/API trigger
- Schedule trigger: Counts toward daily cap (Pro: 5/day, Team: 25/day, per official pricing page)
- Webhook/API trigger: Event-driven, does not count toward daily cap (but confirm with latest official docs, as billing mechanics may change)
Recommended Pro plan allocation strategy:
| Routine | Trigger Type | Counts Toward Cap? | Frequency |
|---|---|---|---|
| Morning PR digest | Schedule | Yes | Daily (uses 1/5) |
| Weekly doc scan | Schedule | Yes | Weekly Monday (uses 1/5, Mondays only) |
| PR open auto-review | Webhook | No | Every PR open |
| Issue creation triage | Webhook | No | Every issue created |
| Emergency alerts (Sentry) | API trigger | No | On demand |
Bottom line: Reserve Schedule triggers for "proactive scans at fixed time points." Immediate-response events (PRs, issues, alerts) should all use Webhook/API triggers — this way, the 5/day cap is nearly impossible to exhaust.
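The allocation logic boils down to counting only Schedule triggers against the cap. A quick sketch with made-up routine definitions (the cap value is from the pricing discussion above; recheck it before relying on it):

```python
PRO_DAILY_SCHEDULE_CAP = 5  # per the official pricing page; may change

# Hypothetical routine inventory mirroring the allocation table above.
routines = [
    {"name": "morning-pr-digest", "trigger": "schedule"},
    {"name": "weekly-doc-scan",   "trigger": "schedule"},
    {"name": "pr-open-review",    "trigger": "webhook"},
    {"name": "issue-triage",      "trigger": "webhook"},
    {"name": "sentry-alerts",     "trigger": "api"},
]

# Only Schedule triggers count toward the daily cap.
scheduled = [r for r in routines if r["trigger"] == "schedule"]
print(f"{len(scheduled)}/{PRO_DAILY_SCHEDULE_CAP} schedule slots used")
```

With this inventory, only 2 of 5 slots are spoken for, and the weekly scan only consumes its slot on Mondays, so there is plenty of headroom.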
When to consider upgrading to Team: If you manage multiple client repos (Chris's scenario), each repo having one daily morning Schedule Routine, you'll hit the cap beyond 5 clients. Team plan's 25/day is a better fit for freelancers.
Edge Cases and Risk Management — When Routines Do the Unexpected
Routines are powerful, but token consumption and unexpected behavior are real risks. The Register's critical perspective is somewhat one-sided, but the core concern is valid: Claude, unsupervised, may be overly aggressive.
Leo's firsthand lesson is worth noting: he previously set up a GitHub Actions bot to auto-comment on PRs, and the bot left 500-word detailed analyses for every minor typo, turning the entire PR thread into a mess until his collaborator complained.
Risk management checklist:
- [ ] Explicit "do not" boundaries: Every Routine prompt must include a `Do NOT:` section listing operations Claude should not perform (don't push to main, don't modify more than X files, don't leave more than one comment)
- [ ] Test with /loop or Run now first: Before setting up a schedule, trigger manually once and carefully review the output. Only enable auto-scheduling after confirming the output meets expectations
- [ ] Set reviewable output formats: Require Claude's output to follow a fixed format (markdown checklist, bullet points) so you can quickly scan rather than re-read everything
- [ ] Local state → Desktop tasks: Any task needing a local `.env`, local databases, or local tools should not go in Cloud Routines. Use Desktop scheduled tasks instead
- [ ] Monitor token consumption: Complex Routine prompts (asking Claude to scan large repos) can consume substantial tokens. Monitor Claude Code usage closely in the first week to ensure consumption stays within expectations
- [ ] Confirm failure notification mechanism: How will you know when a Routine fails? Execution history is currently viewable at claude.ai/code/routines. For proactive notifications, you need to explicitly add Slack failure notification instructions in the Routine prompt (e.g., "if you encounter an error, send a failure summary to #dev-alerts") — this is not automatic behavior
Common pitfall: A frequent mistake is making the Routine prompt too complex ("scan all files, analyze all PRs, update all docs, notify everyone"), causing each execution to consume massive tokens with declining output quality. The best Routines do one thing well.
This principle applies to all automation tools — the biggest trap of automation is "cramming too many things into one workflow," making it harder to maintain than doing things manually.
Conclusion
The greatest value of Claude Code Routines isn't "letting you automate more things" — it's making repetitive work that requires AI judgment no longer require your presence. Those tasks that fall between "too complex for Zapier" and "too trivial to do yourself" — Routines fill exactly that gap.
Start here:
- First, confirm your Claude Code CLAUDE.md and Skills setup is in place — Routine prompts share the same behavioral foundation as your everyday Claude Code configuration
- Create your first minimal Routine: a morning PR digest (if you have a GitHub repo), or a weekly doc scan
- Use Run now to manually trigger and review output quality
- Once output meets expectations, enable auto-scheduling and observe for one week
- If you're still satisfied after a week, then consider expanding to a second Routine
Don't start by creating 10 Routines. One Routine that works well is worth more than 10 you need to constantly fix.
FAQ
What subscription plan do I need for Claude Code Routines? Is Pro's 5/day enough?
The Pro plan provides 5 schedule triggers per day; Team/Enterprise plans offer 25/day (per Anthropic's official pricing page — confirm the latest limits before relying on these numbers). The key insight is that Webhook/API triggers do not count toward the daily schedule cap, so indie makers can use the Pro plan with Webhook triggers for high-frequency events (PR opens, issue creation) and reserve the 5 schedule triggers for fixed schedules (morning digest, weekly doc scan). This covers most indie maker needs.
Can Cloud Routines access my local files?
No. Every Cloud Routine execution performs a fresh clone in Anthropic's cloud environment — it cannot access your local .env.local, local databases, or other local state. Tasks requiring local file access must use Desktop scheduled tasks instead. Credentials (API keys) need to be configured in the Routine's Environment Variables settings.
Do Routines replace GitHub Actions?
No — they complement each other. GitHub Actions excels at deterministic CI/CD pipelines (build, test, deploy) with fixed workflows. Routines excel at judgment-based tasks requiring AI context understanding (PR review, doc quality scanning, Sentry error summaries). The optimal combination lets GitHub Actions handle CI/CD while Routines handle the reasoning-intensive parts.
How do I create my first Routine? What are the prerequisites?
Prerequisites: (1) Claude Code on the web enabled (claude.ai/code); (2) GitHub repository connected to Claude Code; (3) If needed, API keys configured in Environment Variables. Setup flow: Go to claude.ai/code/routines → New Routine → choose trigger type (Schedule/Webhook/API) → set repository and prompt → save. Initial setup takes about 15-30 minutes; after that, it's nearly zero maintenance.