AI Reading List
Curated essays, developer guides, and research from Anthropic — the company behind Claude. Essential reading for anyone building with AI.
✎ CEO Essays
Machines of Loving Grace: How AI Could Transform the World for the Better
Must Read: A 15,000-word optimistic vision of how AI could transform biology, medicine, economic development, and global peace — Dario's view of what happens if "everything goes right."
The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI
Must Read: A 20,000-word sequel to "Machines of Loving Grace" — maps out AI threats to national security, economies, and democracy, and proposes a "battle plan" to defeat them.
The Urgency of Interpretability
Makes the case for AI interpretability research — explains what it is and why it matters for safety, and calls for the "black box" of AI to be opened by 2027.
On DeepSeek and Export Controls
Response to DeepSeek R1 — argues export controls are working as intended by forcing efficiency gains rather than enabling parity.
💻 Developer Guides
Building Effective Agents
Must Read: The most influential guide on agent architecture. Key insight: "the most successful implementations use simple, composable patterns rather than complex frameworks."
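The "simple, composable patterns" the guide advocates can be sketched as a plain loop: call the model, execute any tool it requests, feed the result back, and stop when it produces a final answer. Everything below (`call_model`, the `search` tool) is a hypothetical stub so the sketch runs without an API key — it illustrates the pattern, not Anthropic's implementation.

```python
def call_model(messages):
    """Stand-in for a real chat-completion call. Returns either a tool
    request or a final answer, depending on the conversation so far."""
    last = messages[-1]["content"]
    if "TOOL_RESULT" in last:
        return {"type": "final", "text": f"Answer based on: {last}"}
    return {"type": "tool_call", "tool": "search", "input": last}

# Hypothetical tool registry: name -> callable.
TOOLS = {"search": lambda q: f"TOOL_RESULT for '{q}'"}

def run_agent(task, max_steps=5):
    """The whole agent: an augmented LLM in a bounded loop."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if reply["type"] == "final":
            return reply["text"]
        # Execute the requested tool and append the observation.
        result = TOOLS[reply["tool"]](reply["input"])
        messages.append({"role": "user", "content": result})
    raise RuntimeError("step budget exhausted")

print(run_agent("what is MCP?"))
```

The `max_steps` bound is the kind of simple guardrail the guide favors over framework-level orchestration.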
Introducing the Model Context Protocol
Must Read: Open-sourced MCP — a universal standard for connecting AI to tools and data. SDKs for Python, TypeScript, plus pre-built servers for GitHub, Slack, and more.
Contextual Retrieval
Must Read: Dramatically improves RAG by prepending chunk-specific context before embedding. Reduces retrieval failures by 49% (67% with reranking).
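The core move of contextual retrieval is simple: before a chunk is embedded (or BM25-indexed), prepend a short, chunk-specific blurb situating it in its source document, so statements like "revenue grew 3%" stay meaningful in isolation. A minimal sketch — in the article, the situating text is produced by an LLM reading the full document; `situate_chunk` below is a fixed-template stand-in for that call:

```python
def situate_chunk(doc_title, chunk):
    """Hypothetical stand-in for the LLM call that writes per-chunk context."""
    return f"This chunk is from '{doc_title}'."

def contextualize(doc_title, chunks):
    """Return the augmented strings that actually get embedded/indexed."""
    return [f"{situate_chunk(doc_title, c)} {c}" for c in chunks]

chunks = ["Revenue grew 3% over the previous quarter."]
print(contextualize("ACME Q2 2023 filing", chunks))
# The original chunk text is unchanged; only the indexed form is augmented.
```

The reported failure-rate reductions come from running this augmentation over both the embedding index and a BM25 index, then optionally reranking the merged results.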
Prompt Caching with Claude
Cache frequently used context between API calls; cache reads are billed at 0.1x the base input-token price. Combined with batch processing, costs can drop by up to 95%.
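A back-of-envelope check of how the two discounts compose, assuming cache reads at 0.1x the base input price and a 50% batch-processing discount (the 0.5 batch multiplier is an assumption of this sketch, not stated in the line above):

```python
BASE = 1.0          # normalized input-token price
CACHE_READ = 0.1    # cached context re-read, per the article
BATCH = 0.5         # assumed batch-processing discount

combined = BASE * CACHE_READ * BATCH   # multiplier on cached, batched input
savings = 1 - combined                 # fraction saved vs. uncached, unbatched
print(f"combined multiplier: {combined:.2f}, savings: {savings:.0%}")
# 0.1 * 0.5 = 0.05, i.e. the "up to 95% cost reduction" figure
```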
Claude Code: Best Practices for Agentic Coding
Official guide for using Claude Code CLI. Covers codebase understanding, multi-file editing, and multi-agent lead/worker patterns.
Introducing Advanced Tool Use
Three new features: Tool Search Tool (discovers relevant tools from catalogs of thousands), Programmatic Tool Calling (invokes tools from within code execution), and Tool Use Examples (lifted accuracy from 72% to 90%).
Claude's Extended Thinking
Visible step-by-step chain-of-thought reasoning. Developers can set "thinking budgets" to control depth vs. speed tradeoff.
🛡 Safety & Alignment
Constitutional AI: Harmlessness from AI Feedback
Must Read: Seminal paper introducing Constitutional AI — training AI to follow explicit principles rather than relying solely on human feedback. Foundation of Claude.
Claude's New Constitution
Published Claude's full constitution under CC0 license. Describes Claude's values, priorities, and behavioral guidelines — emphasizes understanding rationale over listing rules.
🔬 Research Highlights
Mapping the Mind of a Large Language Model
Must Read: Found millions of interpretable "features" inside Claude 3 Sonnet. Created "Golden Gate Claude" by amplifying a single feature. First detailed look inside a production LLM.
Alignment Faking in Large Language Models
Claude 3 Opus can strategically "fake" alignment — complying with training while privately preserving its own preferences. First empirical evidence of this behavior.
Sleeper Agents: Training Deceptive LLMs That Persist Through Safety Training
Proof-of-concept showing LLMs can maintain backdoor behaviors through safety fine-tuning — e.g., writing secure code for "2023" but exploitable code for "2024."
Introducing the Anthropic Economic Index
Analysis of 2M AI conversations: 49% of jobs can use AI in 25%+ of tasks. Shift back toward augmentation (52%) over automation (45%).
FAQ
What are the most important Anthropic essays to read?
Start with "Machines of Loving Grace" by Dario Amodei for an optimistic vision of AI's potential, then read "Building Effective Agents" for practical agent architecture patterns. "Introducing the Model Context Protocol" is essential for understanding MCP.
Who is Dario Amodei?
Dario Amodei is the CEO and co-founder of Anthropic, the company behind Claude. He previously led research at OpenAI. His long-form essays on AI's potential and risks are widely read in the AI community.
Is this reading list updated?
Yes, we add new articles as Anthropic publishes them. The list covers content from 2023 through February 2026, including Dario Amodei's latest essay and the newest research papers.