AWS Strands Agents SDK Guide: Should Indie Makers Pick Strands, LangGraph, or CrewAI in 2026?

Published April 29, 2026 · Updated May 5, 2026
Written by Luna · Researched by Mia · Reviewed by Eno · Continuously Updated · 10 min read

You open GitHub and there it is, yet another AI agent framework. This time from AWS. Your first instinct might be: "AWS means complex, and I'll get locked into their ecosystem." But here's where Strands Agents breaks the pattern: it's arguably the fastest to get started among major agent SDKs, it's Apache 2.0 open source, and you can plug in Anthropic Claude, OpenAI, or even a local Ollama instance without writing a single line of AWS code.

This guide breaks down what Strands actually solves, how it fundamentally differs from LangGraph and CrewAI, and which one you should pick for your next agent project, all from an indie maker's perspective.

TL;DR

  • One-liner: Strands is currently the shortest path from "idea" to "running AI agent," designed for indie developers who want to validate ideas fast, not just for enterprises
  • Strands is an open-source agent SDK from AWS, Apache 2.0 licensed, supporting multiple LLM providers with no AWS lock-in
  • Core design: model-driven (the AI model plans and executes steps on its own, rather than engineers pre-defining a flow graph). No graphs, no crews. Give the model tools and let it decide how to use them
  • First-class MCP support, connecting directly to thousands of existing MCP servers as tools
  • Python SDK is stable (used in Amazon's internal production), TypeScript SDK is still preview
  • Best for: rapid idea validation, projects needing lots of MCP tool integrations, indie makers already deploying on AWS
  • Not ideal for: precise workflow control, visual debugging, TypeScript-first projects

Your First Impression of Strands Is Probably Wrong

"Made by AWS = closed source = locked into AWS." That instinct is completely wrong with Strands.

Strands uses Apache 2.0 licensing, meaning you can freely use, modify, and commercialize it without giving anything back to AWS. We actually tested an agent locally using the Anthropic Claude API (claude-sonnet-4-20250514) with MCP GitHub tools, and never touched a single AWS service. One gotcha we hit: the first time running the MCP GitHub server, we got a 401 auth error because we hadn't set the GITHUB_TOKEN environment variable. Once the token was configured, everything worked smoothly. Overall, pip install strands-agents plus an Anthropic API key is all you need to get started.

More importantly, there's the multi-provider design. Strands supports Amazon Bedrock, Anthropic Claude API, OpenAI, Ollama, LiteLLM, and even community-contributed providers like Cohere, xAI, and Fireworks. You can use Claude on Bedrock today and switch to the direct Anthropic API tomorrow with a single line of code changed.
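That one-line swap looks like this in practice. A minimal sketch, assuming the provider classes under strands.models (the Bedrock model ID and max_tokens value are illustrative):

```python
from strands import Agent
from strands.models.anthropic import AnthropicModel
# from strands.models.bedrock import BedrockModel

# Direct Anthropic API today (max_tokens is required by the Anthropic API)...
model = AnthropicModel(model_id="claude-sonnet-4-20250514", max_tokens=1024)

# ...Bedrock tomorrow, by changing only this one line:
# model = BedrockModel(model_id="us.anthropic.claude-sonnet-4-20250514-v1:0")

agent = Agent(model=model)
```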

What Is Strands Agents? From May 2025 Launch to 14 Million Downloads

The Strands Agents SDK was open-sourced by AWS Labs on May 16, 2025, with a clear positioning: a model-driven AI agent framework, in contrast to LangGraph's workflow-driven and CrewAI's role-based designs.

Adoption numbers as of April 2026:

  • GitHub Stars: 6,200+
  • PyPI Downloads: 14 million+ cumulative, averaging ~5.35 million per month (per PyPI Stats)
  • Internal Amazon usage: Q Developer, AWS Glue, and VPC Reachability Analyzer all run production agents on Strands
  • Partners: Anthropic, Meta (Llama), Langfuse, mem0.ai, Tavily

An honest note about download numbers: The 5.35M+ monthly PyPI downloads include heavy CI/CD pipeline duplication, so the actual number of unique users is much lower. More meaningful metrics are GitHub Stars and contributor diversity. Contributors come from Accenture, Anthropic, Meta, PwC, and others.

In February 2026, AWS launched Strands Labs, a separate experimental GitHub organization for projects not yet in the production SDK (Robots, Robots Sim, AI Functions). Watching Strands Labs reveals where AWS is betting on the future of agentic AI.

Strands Technical Architecture: Why "Model-Driven" Isn't Cutting Corners

To understand Strands, you need to understand its agent loop:

  1. Call the model: Send user input and the list of available tools to the LLM
  2. Check the response: Did the model return a final answer or a tool call?
  3. Execute tools: If it's a tool call, run the corresponding tool and bring back the results
  4. Repeat: Call the model again with the tool results until the model decides "I have the answer"
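The four steps above can be sketched in plain Python with a stubbed model, to show the shape of the loop without the SDK (every name here is illustrative, not a Strands API):

```python
# Minimal agent-loop sketch: call model -> check response -> run tool -> repeat.
def run_agent_loop(model, tools, user_input, max_iters=10):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_iters):
        reply = model(messages, tools)           # 1. call the model
        if reply["type"] == "final":             # 2. final answer? stop
            return reply["content"]
        tool = tools[reply["tool"]]              # 3. run the requested tool
        result = tool(**reply["args"])
        messages.append({"role": "tool", "content": str(result)})  # 4. repeat
    raise RuntimeError("agent loop did not converge")

# Stub model: requests the "add" tool once, then answers with its result.
def fake_model(messages, tools):
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "tool": "add", "args": {"a": 2, "b": 3}}
    return {"type": "final", "content": messages[-1]["content"]}

answer = run_agent_loop(fake_model, {"add": lambda a, b: a + b}, "what is 2+3?")
```

Note that the engineer never encodes a path here: the model's replies alone decide which tools run and when the loop stops.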

This is fundamentally different from LangGraph. LangGraph requires you to define a state machine: every node, every edge, every conditional branch is set by the engineer. Strands' philosophy is that post-2025 frontier models (Claude, GPT-4 class) are smart enough for planning, and the framework's job is to "not get in the model's way" rather than "plan the path for the model."

First-Class MCP Support

Strands treats MCP (Model Context Protocol) as a first-class citizen. You can connect any MCP server as an agent tool without writing custom wrappers:

from mcp import StdioServerParameters, stdio_client
from strands import Agent
from strands.tools.mcp import MCPClient

# MCPClient takes a factory that returns an MCP transport
github_mcp = MCPClient(lambda: stdio_client(
    StdioServerParameters(command="npx", args=["-y", "@modelcontextprotocol/server-github"])
))

with github_mcp:
    agent = Agent(tools=github_mcp.list_tools_sync())
    agent("List my recent GitHub PRs")

Note: AWS official documentation examples for MCP GitHub use Amazon Bedrock (Claude 3.7 Sonnet on Bedrock), which requires AWS credentials. However, the Strands framework itself supports using the Anthropic API or other providers directly. The code above works with AnthropicModel as well, no AWS account needed.

This is significant for indie makers. Instead of writing API wrappers one by one, you can plug into GitHub, Slack, databases, search engines, and more through existing MCP servers.

Multi-Agent Patterns

Strands supports three multi-agent patterns:

  • Graph: Structured routing for scenarios with clear branching logic
  • Swarm: Parallel execution for tasks that can run independently
  • Workflow: Sequential pipeline for processes with fixed steps
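As a rough sketch of the Graph pattern, assuming the GraphBuilder API described in the Strands multi-agent documentation (the agent prompts, node names, and task string are illustrative):

```python
from strands import Agent
from strands.multiagent import GraphBuilder

researcher = Agent(system_prompt="You gather facts on the topic.")
writer = Agent(system_prompt="You turn the gathered facts into a summary.")

builder = GraphBuilder()
builder.add_node(researcher, "research")
builder.add_node(writer, "write")
builder.add_edge("research", "write")  # deterministic routing between agents
graph = builder.build()

# result = graph("Summarize recent MCP adoption")  # needs an LLM provider configured
```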

Agent-to-Agent (A2A) cross-framework collaboration is also in development.

Observability

Built-in OpenTelemetry instrumentation lets you connect directly to observability platforms like Langfuse. According to AWS's official technical documentation, every agent loop iteration produces trace spans covering model calls, tool execution, and token usage.
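A minimal setup sketch, assuming the StrandsTelemetry helper in the SDK's telemetry module; the OTLP exporter is configured via the standard OTEL_EXPORTER_OTLP_* environment variables (endpoint values below are illustrative):

```python
# e.g. OTEL_EXPORTER_OTLP_ENDPOINT=https://cloud.langfuse.com/api/public/otel
from strands.telemetry import StrandsTelemetry

StrandsTelemetry().setup_otlp_exporter()  # ship agent-loop spans to your collector
```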

Three-Way Framework Comparison: Strands vs LangGraph vs CrewAI

The following comparison is based on official documentation and hands-on testing, as of April 2026:

| Dimension | Strands Agents | LangGraph | CrewAI |
|---|---|---|---|
| Design Philosophy | Model-driven (LLM decides) | Graph state machine (engineer decides) | Role-based crew (role division) |
| Learning Curve | Lowest (3-5 lines to start) | Highest (requires graph thinking) | Medium (intuitive roles but patterns to learn) |
| MCP Support | First-class | Via adapter | Limited |
| TypeScript | Preview (incomplete) | Full support | Full support |
| Debugging Tools | OpenTelemetry traces (no native visualization) | LangGraph Studio (visual) | CrewAI Studio + replay |
| Best For | Rapid validation, heavy MCP usage, AWS deployment | Complex workflows, precise control needed | Role-based team simulations |
| Production Maturity | Used internally at Amazon (Python) | Most mature, 47M+ monthly downloads (self-reported) | Has enterprise control plane |
| License | Apache 2.0 | MIT | MIT |
| Model Lock-in | None (multi-provider) | None (via LangChain) | None (multi-provider) |

This isn't about which is "better." It's about which fits your situation.

Indie Maker Decision Guide: Which One Should You Actually Pick?

Choose Strands if you...

  • Are building your first agent and want the fastest path: Strands' model-driven design doesn't require learning graph concepts or defining role schemas. Five lines of Python gets your first agent running
  • Need to connect lots of external tools: The MCP ecosystem is your force multiplier. GitHub, Slack, databases, search engines all have existing MCP servers ready to use
  • Already deploy on AWS: Bedrock + Lambda + AgentCore provides a complete deployment path
  • Have scenarios where LLM autonomy is acceptable: Your agent doesn't need strict step-by-step control

Choose LangGraph if you need...

  • Deterministic workflows: Every step must follow a specific order, support rollback, and have explicit error handling
  • Visual debugging: LangGraph Studio lets you debug agent behavior like reading a flowchart. This is Strands' most obvious gap right now, as Strands only offers OpenTelemetry trace output with no native visual debugging interface
  • Your team already knows LangChain: The learning curve drops dramatically
  • Production stability as the top priority: LangGraph is currently the most mature option in the community

Choose CrewAI if you want...

  • Multi-role collaboration: Your agent logic naturally fits the "researcher gathers data, analyst organizes, writer produces" pattern
  • No-code/low-code rapid iteration: CrewAI Studio provides a graphical interface
  • Built-in replay: Replaying and comparing different runs is important for your debugging workflow

Is Migrating from LangGraph Worth It?

If you already have production agents on LangGraph, the migration cost to Strands depends on your agent's complexity:

  • Simple agents (single tool chain, no complex branching): Low migration cost, roughly 1-2 days. Strands' model-driven design can directly replace simple linear graphs, typically reducing code by 60-70%
  • Medium complexity (conditional branches, error handling): Takes 3-5 days. You'll need to convert logic that was hardcoded in graph edges into tool descriptions, letting the model make decisions. The risk is that model-driven behavior is less deterministic than graphs, so thorough testing is needed
  • High complexity (nested sub-graphs, custom state management): Migration is not recommended. While Strands offers Graph/Swarm/Workflow multi-agent patterns, they're not as mature as LangGraph's state machine ecosystem

Practical advice: Unless you have specific pain points with LangGraph (e.g., MCP integration is too cumbersome, graph definitions are too bloated), a working LangGraph agent isn't worth migrating just for the sake of switching frameworks. Save Strands for your next new project.

Get Your First Strands Agent Running in 30 Minutes

The following steps are verified against official documentation and don't require an AWS account. The basic Python agent (Steps 1-3) takes about 10 minutes; adding MCP tool integration and environment troubleshooting brings the total to around 30 minutes.

Prerequisites

Before you begin, make sure your environment has:

  • Python 3.10+: Minimum requirement for the Strands SDK
  • pip: Python package manager
  • LLM API Key: An Anthropic API key, AWS Bedrock configuration, or local Ollama all work
  • Node.js v18+ (for Step 4): MCP servers run via npx, which requires Node.js
  • GITHUB_TOKEN (for Step 4): A GitHub Personal Access Token for MCP GitHub server authentication

Step 1: Install

pip install strands-agents strands-agents-tools

Step 2: Configure Your LLM Provider

Using the Anthropic Claude API (no AWS needed):

import os
os.environ["ANTHROPIC_API_KEY"] = "your-key-here"

from strands.models.anthropic import AnthropicModel
# max_tokens is required by the Anthropic Messages API
model = AnthropicModel(model_id="claude-sonnet-4-20250514", max_tokens=1024)

Or using local Ollama:

from strands.models.ollama import OllamaModel
model = OllamaModel(host="http://localhost:11434", model_id="llama3")

Step 3: Minimal Agent

from strands import Agent

agent = Agent(model=model)
response = agent("Explain MCP in one sentence")
print(response)

Step 4: Add Tools (MCP)

Prerequisite: This step uses the MCP GitHub server, which requires Node.js (v18+ recommended) since the npx command comes from Node.js. The GitHub MCP server also requires authentication to access the API, otherwise you'll get a 401 auth error. Set the environment variable first:

export GITHUB_TOKEN="ghp_your_personal_access_token"

You can generate a token at GitHub Settings > Developer settings > Personal access tokens. Check the repo permission scope.

from mcp import StdioServerParameters, stdio_client
from strands.tools.mcp import MCPClient

github_mcp = MCPClient(lambda: stdio_client(
    StdioServerParameters(command="npx", args=["-y", "@modelcontextprotocol/server-github"])
))

with github_mcp:  # MCP tools are only live inside the client context
    agent = Agent(model=model, tools=github_mcp.list_tools_sync())
    agent("List the 5 most recent PRs in strands-agents/sdk-python")

Step 5: Deploy (Optional)

AWS provides official reference architectures for deploying to Lambda or AgentCore, but you can also package your agent as any Python service and deploy to Railway, Fly.io, or your own VPS. The key point: Strands SDK does not require an AWS environment.
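To make the "any Python service" claim concrete, here is a stdlib-only HTTP wrapper around an agent callable; the endpoint shape and helper names are our own, not a Strands API:

```python
# Wrap any agent callable (e.g. a Strands Agent) as a plain HTTP service.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def answer_payload(agent, body: bytes) -> bytes:
    """Turn a JSON {"prompt": ...} request into a JSON {"answer": ...} reply."""
    prompt = json.loads(body)["prompt"]
    return json.dumps({"answer": str(agent(prompt))}).encode()

def make_handler(agent):
    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            reply = answer_payload(agent, body)
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(reply)
    return Handler

# To serve a real agent on any host (Railway, Fly.io, a VPS):
# HTTPServer(("0.0.0.0", 8080), make_handler(agent)).serve_forever()
```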

Real Cost Breakdown: How Much Does Running a Production Agent Cost?

The Strands SDK itself is free, and AgentCore's harness doesn't charge extra. What you actually pay for is model inference and compute resources.

Cost Estimate (Using Claude 3.5 Haiku via Bedrock)

Based on the AWS Bedrock pricing page, Claude 3.5 Haiku rates are:

  • Input: $0.80 / million tokens
  • Output: $4.00 / million tokens

For a medium-complexity agent (roughly 3,000 input tokens + 1,000 output tokens per request), processing 100 requests per day:

  • Input cost: 100 x 3,000 / 1,000,000 x $0.80 = $0.24/day
  • Output cost: 100 x 1,000 / 1,000,000 x $4.00 = $0.40/day
  • Monthly total: ~$19.20 (excluding Lambda or Fargate compute costs)

With Claude Sonnet 4 (Input $3 / Output $15 per million tokens), the same scenario runs about $72/month.

Bedrock vs direct Anthropic API: Based on AWS pricing, Claude models on Bedrock have the same per-token price as the direct Anthropic API, with no Bedrock markup. Bedrock's advantage is cross-region inference and integration with other AWS services.
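The arithmetic above generalizes to a two-line helper; the prices are the per-million-token rates quoted in this article, so adjust them for your model:

```python
# Monthly model-inference cost from per-request token counts and
# per-million-token prices (compute costs excluded).
def monthly_cost(in_tokens, out_tokens, requests_per_day,
                 in_price, out_price, days=30):
    daily = requests_per_day * (in_tokens * in_price + out_tokens * out_price) / 1_000_000
    return round(daily * days, 2)

haiku = monthly_cost(3000, 1000, 100, in_price=0.80, out_price=4.00)    # Claude 3.5 Haiku
sonnet = monthly_cost(3000, 1000, 100, in_price=3.00, out_price=15.00)  # Claude Sonnet 4
```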

Cost Optimization

Bedrock supports prompt caching, which gives cached tokens a 90% discount according to AWS documentation. If your agent has heavy repetition in system prompts or context, enabling caching can significantly cut input costs.

Risk Disclosure

Before choosing Strands, here are limitations you should be aware of:

  1. Incomplete TypeScript SDK: As of April 2026, it's still in preview and lacks multi-agent features. If your stack is pure TypeScript/Node.js, you may need to wait
  2. Model-driven unpredictability: Letting the LLM make decisions means you can't precisely control every step. For scenarios requiring deterministic workflows (e.g., financial transactions, legal document processing), LangGraph's state machine is a better fit
  3. AWS long-term commitment is unknown: While Apache 2.0 licensing means the code won't disappear, AWS has changed its open source strategy before (see the Elasticsearch to OpenSearch situation). The good news is that Apache 2.0 itself is one of the most permissive licenses available
  4. Relatively small community: LangGraph has 47M+ monthly downloads, while Strands has around 5.35M (including CI/CD duplication). When you hit problems, Stack Overflow and community forum resources will be thinner

Conclusion

Strands represents a "trust the model" design philosophy. In 2026, as frontier models keep getting smarter, hand-coded state machines increasingly look like over-engineering.

For indie makers, the biggest value of Strands isn't being "the most powerful," but being "the fastest path from idea to running agent." The MCP ecosystem's leverage effect means a solo developer can integrate many tools, and Apache 2.0 licensing ensures you won't get locked in.

If you're building your first agent: Start with the quickstart above. Get your first agent running in 30 minutes and experience model-driven design firsthand. Strands' low barrier lets you focus on "what problem should my agent solve" instead of "how to configure the framework."

If you're already using LangGraph: No need to rush a migration. Try Strands as your MCP tool integration solution first. Build a small side project with Strands and feel the difference between the two design philosophies. If your LangGraph agent is already running stable, let it keep running. Save Strands for your next idea that needs rapid validation.

FAQ

Is Strands Agents a proprietary AWS service? Will I get locked in?

No. Strands is Apache 2.0 open source and works with any LLM provider (Anthropic Claude, OpenAI, Ollama, etc.). You can also deploy it outside AWS. You can run everything locally without an AWS account.

Is Strands the same thing as Amazon Bedrock AgentCore?

No. Strands is an open source framework (SDK), while AgentCore is AWS's managed hosting platform. You can deploy Strands-built agents to AgentCore, but you can also deploy to Lambda, Fargate, or any server you manage.

Is the Strands TypeScript SDK stable enough for production?

As of April 2026, it is still in preview and lacks multi-agent features. For production use, the Python SDK is recommended. Amazon's own services like Q Developer and AWS Glue use the Python version internally.

Is Strands MCP support the same as Claude Desktop's MCP?

They operate at different levels. Strands is an MCP client-side integration that can connect to any MCP server as an agent tool. Claude Desktop is also an MCP client. Both follow the same MCP protocol and can share MCP servers.
