How to Run Autonomous AI Agents with Claude Code (2026 Guide)
Most people use Claude Code as a fancy CLI assistant. Ask a question, get an answer, repeat. But Claude Code can do something far more powerful: run as a fully autonomous agent that works 24/7 without human intervention.
At Paxrel, we run an autonomous agent on a $5/month VPS that manages an entire business — scraping news, writing newsletters, managing APIs, monitoring services, and communicating with the team via Telegram. Here's exactly how we built it.
What Makes Claude Code Different from ChatGPT
Claude Code isn't a chat interface. It's a full-featured CLI that runs on your machine (or server) with direct access to:
- Your filesystem — read, write, and edit any file
- Bash commands — install packages, run scripts, manage processes
- Persistent memory — CLAUDE.md files that survive across sessions
- Tool use — structured access to grep, glob, web search, and custom tools
This means Claude Code can do things that chat-based AI simply cannot: manage cron jobs, deploy code, interact with APIs, and maintain state across conversations.
Step 1: Set Up Your Server
You need a Linux server that runs 24/7. A $5/month VPS from Hetzner, DigitalOcean, or Contabo works perfectly.
```shell
# Install Claude Code
npm install -g @anthropic-ai/claude-code

# Verify it works
claude --version

# Create your project directory
mkdir ~/my-agent && cd ~/my-agent
```
Claude Code needs an Anthropic API key or a Max subscription. Set it up:
```shell
# Option 1: API key
export ANTHROPIC_API_KEY=sk-ant-...

# Option 2: OAuth login (interactive, one-time)
claude          # then run /login inside the session
```
Step 2: Define Your Agent's Identity with CLAUDE.md
The CLAUDE.md file is your agent's brain. It loads automatically every session and tells Claude who it is, what it should do, and how to behave.
```markdown
# My Agent — Autonomous Newsletter Manager

## Mission
Curate and publish an AI newsletter 3x/week.

## Pipeline
1. Scrape RSS feeds for AI news
2. Score articles by relevance (use DeepSeek API)
3. Write newsletter draft
4. Publish via Buttondown API
5. Post teaser on Twitter

## Credentials
All API keys are in `credentials.env`

## Rules
- Never spend more than $5/day on API calls
- Always log actions to daily notes
- If blocked, message the team via Telegram
```
This isn't just documentation — it's executable context. Every time Claude Code starts, it reads this file and knows exactly what to do.
Step 3: Build Your Tool Scripts
Your agent needs tools to interact with the world. Write them as simple Python or Node.js scripts:
```python
# scraper.py — Fetch articles from RSS feeds
import feedparser

FEEDS = [
    "https://hnrss.org/newest?q=AI+agent&points=10",
    "https://www.reddit.com/r/artificial/.rss",
    "https://blog.anthropic.com/rss",
]

def scrape_all():
    articles = []
    for url in FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries[:10]:
            articles.append({
                "title": entry.title,
                "url": entry.link,
                "source": feed.feed.get("title", url),
                "published": entry.get("published", ""),
            })
    return articles

if __name__ == "__main__":
    import json
    results = scrape_all()
    print(json.dumps(results, indent=2))
    print(f"\n{len(results)} articles scraped")
```
Your agent can run these scripts via Bash, read the output, and make decisions based on the results.
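Step 2 of the pipeline scores articles with the DeepSeek API. Here is a minimal sketch of that step, assuming DeepSeek's OpenAI-compatible `/chat/completions` endpoint, the `deepseek-chat` model name, and a `DEEPSEEK_API_KEY` environment variable; the `parse_score` helper guards against models that add words despite being asked for just a number:

```python
# score.py — Rank scraped articles with an LLM (a sketch; endpoint,
# model name, and env var name are assumptions, check DeepSeek's docs)
import json
import os
import re
import urllib.request

DEEPSEEK_URL = "https://api.deepseek.com/chat/completions"

def parse_score(reply):
    """Pull the first number out of the model's reply; models sometimes
    add words even when asked for 'just the number'."""
    match = re.search(r"\d+(\.\d+)?", reply)
    if not match:
        return 0.0  # unparseable reply -> lowest relevance
    return min(10.0, float(match.group()))

def score_article(article):
    """Ask the LLM to rate one article's relevance, 0-10."""
    prompt = (
        "Rate this article's relevance to an AI-agents newsletter, 0-10. "
        "Reply with just the number.\n"
        f"Title: {article['title']}\nURL: {article['url']}"
    )
    body = json.dumps({
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        DEEPSEEK_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
    return parse_score(reply)
```

The agent can then sort articles by score and keep the top handful for the draft.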
Step 4: Add Persistent Memory
Agents need to remember what they've done. Use daily notes:
```
# The agent creates and updates these automatically
life/
├── daily/
│   ├── 2026-03-24.md     # Today's work log
│   ├── 2026-03-23.md     # Yesterday
│   └── ...
├── projects/
│   └── newsletter/       # Project-specific state
└── resources/
    └── credentials.env   # API keys (gitignored)
```
Each session, the agent reads its last daily note, picks up where it left off, and continues working. No context is lost between sessions.
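The append-to-today's-note half of that loop fits in a small tool script. A sketch, with `log_action` as a hypothetical helper that writes timestamped entries into `life/daily/`:

```python
# memory.py — append timestamped entries to today's daily note
import datetime
import pathlib

NOTES_DIR = pathlib.Path("life/daily")

def log_action(text, now=None):
    """Append one timestamped line to today's note, creating the note
    first if this is the day's first entry."""
    now = now or datetime.datetime.now()
    NOTES_DIR.mkdir(parents=True, exist_ok=True)
    note = NOTES_DIR / f"{now:%Y-%m-%d}.md"
    if not note.exists():
        note.write_text(f"# Daily note - {now:%Y-%m-%d}\n\n")
    with note.open("a") as f:
        f.write(f"- {now:%H:%M} {text}\n")
    return note
```

Reading the previous day's note is just the reverse lookup: sort the filenames and open the latest one.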
Step 5: Schedule with Cron + Heartbeats
The key to autonomy is scheduled execution. You don't keep Claude Code running — you invoke it on a schedule:
```shell
# Crontab example

# Run agent every 2 hours during business hours
0 8,10,12,14,16,18 * * * cd ~/my-agent && claude -p "Read your daily note and work on the next priority task" --allowedTools "Bash,Read,Write,Edit"

# Newsletter pipeline: Mon/Wed/Fri at 8am UTC
0 8 * * 1,3,5 cd ~/my-agent && claude -p "Run the newsletter pipeline end-to-end" --allowedTools "Bash,Read,Write,Edit"

# Health check every 15 minutes
*/15 * * * * curl -s https://mysite.com > /dev/null || echo "Site down" | telegram-send
```
Tools like ClaudeClaw automate this further with heartbeat signals, Telegram integration, and session management.
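One cron gotcha: it fires on schedule whether or not the previous invocation has finished, so a slow agent run can stack up behind itself. A sketch of a lock wrapper using the standard library's `fcntl.flock` (Unix only; the script name and lockfile path are arbitrary):

```python
# lock.py — skip this run if the previous one is still going
import fcntl
import subprocess
import sys

def run_exclusive(cmd, lockfile="/tmp/my-agent.lock"):
    """Run cmd only if no other invocation currently holds the lock."""
    with open(lockfile, "w") as lock:
        try:
            # Non-blocking: raises immediately if another run holds it
            fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            print("previous run still active; skipping")
            return 0
        return subprocess.call(cmd)

# Crontab usage (hypothetical):
#   0 8 * * * cd ~/my-agent && python3 lock.py claude -p "..." --allowedTools "Bash,Read,Write,Edit"
if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(run_exclusive(sys.argv[1:]))
```

The lock is released automatically when the process exits, so a crashed run can never wedge the schedule.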
Step 6: Add Communication Channels
An autonomous agent needs to report back. Telegram is perfect for this:
```python
# notify.py — Send status updates
import os

import requests

# Read from the environment (credentials.env), never hardcode tokens
BOT_TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]
CHAT_ID = os.environ["TELEGRAM_CHAT_ID"]

def send_message(text):
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        json={"chat_id": CHAT_ID, "text": text},
        timeout=10,
    )

# Usage in your agent's workflow:
send_message("Newsletter #3 published. 88 articles scraped, top story: GPT-5.4 solves frontier math.")
```
Your agent should message you at key moments: task completion, errors, milestones, and daily summaries.
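One way to make sure the error messages actually arrive is a small decorator around each task. `notify_on_error` below is a hypothetical helper: it pipes any exception through a notifier such as `send_message` above, then re-raises so the failure still shows up in cron logs:

```python
# guard.py — turn silent task failures into Telegram alerts
import functools

def notify_on_error(notify):
    """Decorator factory: pass a notifier callable (e.g. send_message)
    and get a decorator that reports exceptions through it."""
    def decorate(task):
        @functools.wraps(task)
        def wrapper(*args, **kwargs):
            try:
                return task(*args, **kwargs)
            except Exception as exc:
                notify(f"{task.__name__} failed: {exc!r}")
                raise  # still crash loudly so cron and logs see it too
        return wrapper
    return decorate

# Usage with the Telegram helper above:
#   @notify_on_error(send_message)
#   def publish_newsletter(): ...
```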
Real-World Architecture
Here's the actual architecture we use at Paxrel to run a business autonomously:
| Component | Tool | Cost |
|---|---|---|
| Server | Contabo VPS (4 vCPU, 8GB RAM) | $5/mo |
| AI Brain | Claude Code (Max subscription) | Included |
| Scoring LLM | DeepSeek V3.2 API | ~$3/mo |
| Newsletter | Buttondown (free tier) | $0 |
| Website | Cloudflare Tunnel + static | $0 |
| Communication | Telegram Bot API | $0 |
| Domain | Cloudflare Registrar | $10/yr |
| Total | | ~$9/mo |
Lessons from Running Agents in Production
1. Always Log Everything
Your agent should write daily notes documenting what it did, what worked, and what failed. Without logs, you're flying blind.
2. Set Hard Spending Limits
AI agents with API access can spend money. Set daily caps (e.g., $20/day max) and alert thresholds. Our agent checks DeepSeek balance every session and alerts if it drops below $5.
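A cap stated in CLAUDE.md is a request, not a guarantee, so it pays to enforce it in code as well. A sketch of a ledger-based check (`record_spend` and the `spend.json` path are illustrative; the $5 figure mirrors the CLAUDE.md rule above):

```python
# budget.py — enforce a hard daily spend cap before paid API calls
import datetime
import json
import pathlib

LEDGER = pathlib.Path("life/resources/spend.json")
DAILY_CAP_USD = 5.00  # mirrors the CLAUDE.md rule

def record_spend(amount_usd, today=None):
    """Add a charge to today's ledger; raise if it would bust the cap."""
    today = today or f"{datetime.date.today():%Y-%m-%d}"
    ledger = json.loads(LEDGER.read_text()) if LEDGER.exists() else {}
    spent = ledger.get(today, 0.0) + amount_usd
    if spent > DAILY_CAP_USD:
        raise RuntimeError(f"daily cap hit: ${spent:.2f} > ${DAILY_CAP_USD:.2f}")
    ledger[today] = spent
    LEDGER.parent.mkdir(parents=True, exist_ok=True)
    LEDGER.write_text(json.dumps(ledger))
    return spent
```

Call `record_spend` with the estimated cost before each paid request; the raised error is exactly the kind of "blocked" state the agent should report via Telegram.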
3. Use Fallback Models
If your primary LLM is down or expensive, have a fallback. We use DeepSeek V3 for scoring (cheap) and Claude for writing (quality). If DeepSeek is down, the pipeline still runs with reduced scoring.
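The fallback itself is only a few lines. A sketch where `primary` and `fallback` are any callables that score an article (all names here are illustrative):

```python
# Fallback pattern: try the primary scorer, then the fallback,
# then settle for a neutral default so the pipeline keeps moving
def score_with_fallback(article, primary, fallback=None):
    """Return the first scorer's result that succeeds."""
    for scorer in (primary, fallback):
        if scorer is None:
            continue
        try:
            return scorer(article)
        except Exception:
            continue  # log the failure in real code; here, just move on
    return 5.0  # neutral score: "reduced scoring" rather than a crash
```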
4. Design for Failure
APIs will fail. RSS feeds will timeout. Rate limits will hit. Your agent should handle all of this gracefully — retry with backoff, skip failed sources, and report issues without crashing.
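A minimal retry helper covers most of these cases (`with_retries` is illustrative; `sleep` is injectable so tests don't actually wait):

```python
# retry.py — exponential backoff for flaky feeds and APIs
import time

def with_retries(fn, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fn; wait 1s, 2s, 4s... between failures, then re-raise
    so the caller can skip this source and report the issue."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```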
5. Keep Humans in the Loop (Where It Matters)
Full autonomy doesn't mean zero oversight. Our agent runs the pipeline autonomously but sends a Telegram notification before publishing, so we can review if needed. For social media posts, the agent drafts content and sends it to the human to post — no automated posting to public channels.
Common Mistakes to Avoid
- Over-engineering the first version. Start with a simple script, not a framework. You can always add complexity later.
- Ignoring security. Never commit API keys to git. Use environment variables and restrict file access.
- No rate limiting. Your agent can make thousands of API calls per minute if you let it. Always add delays and caps.
- Forgetting timezone handling. Cron runs in the server's local time, which on most VPSes is UTC. Your users are in local time. Always be explicit about which timezone you mean.
- Not testing the pipeline end-to-end. Test the full flow before scheduling. A broken cron job at 3am is no fun to debug.
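For the rate-limiting point above, a minimal limiter that enforces a floor between consecutive API calls (`RateLimiter` is illustrative; `clock` and `sleep` are injectable for testing):

```python
# ratelimit.py — never fire API calls faster than min_interval seconds
import time

class RateLimiter:
    def __init__(self, min_interval=1.0, clock=time.monotonic, sleep=time.sleep):
        self.min_interval = min_interval
        self.clock = clock
        self.sleep = sleep
        self._last = None

    def wait(self):
        """Block just long enough to keep calls min_interval apart."""
        if self._last is not None:
            elapsed = self.clock() - self._last
            if elapsed < self.min_interval:
                self.sleep(self.min_interval - elapsed)
        self._last = self.clock()
```

Call `limiter.wait()` before each request in the scraper or scorer loop.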
What's Next
Autonomous AI agents are the next evolution of software. Instead of building apps that wait for user input, you build agents that proactively do work. The tooling is here — Claude Code, MCP, heartbeat systems — and the cost is under $10/month.
The question isn't whether AI agents will run businesses. It's whether you'll be the one building them.