Shareuhack | Is MCP Dead? 3 Scenarios Where Skill and CLI Can't Replace It

March 12, 2026

In late February 2026, a post titled "MCP is dead. Long live the CLI" hit the Hacker News front page and blew up. One camp says MCP is unnecessary abstraction; the other points out that every major tech company has already committed to it. If you've been wrestling with whether to set up an MCP server in Claude Code or Cursor, this article gives you real benchmark data and a 30-second decision framework.

TL;DR

  • MCP isn't dead, but it solves "multi-user, cross-platform tool distribution" problems — not your personal coding workflow
  • CLI beats MCP by up to 32x in token efficiency with near-100% success rates
  • Skill is procedural knowledge (teaches AI how to do things); MCP is capability (gives AI access to tools) — they're complementary, not competing
  • For solo developers: start with CLI + Skill, only reach for existing MCP servers when you need Notion/Figma or other CLI-less cloud services
  • MCP security risks are real: the CoSAI whitepaper identifies 40+ threat categories, and 24% of public servers have zero authentication

The "MCP Is Dead" Debate

MCP (Model Context Protocol) is an open protocol launched by Anthropic in November 2024 that lets AI systems connect to external tools and data sources through a unified interface. The common analogy is "USB-C for AI" — build one MCP server, and every compatible AI client can plug into it.

2025 was MCP's breakout year. OpenAI announced support in March, Google DeepMind followed in April, and Microsoft integrated it into the Windows ecosystem at Build. By December, Anthropic donated MCP to the Linux Foundation's Agentic AI Foundation, with OpenAI, Google, Microsoft, AWS, and Cloudflare all joining as founding members.

Then came the backlash.

In February 2026, Eric Holmes published that post with a blunt argument: LLMs are smart enough now — give them CLI tools plus documentation and they can handle the job. Why add another abstraction layer? Pieter Levels (levelsio) echoed this on X: "MCP, like llms.txt, is yet another unnecessary abstraction that AI doesn't actually need."

But the answer isn't binary. MCP isn't dead — it has dropped from the hype peak into the Gartner trough of disillusionment. The real question isn't "is MCP good?" but "when should you use MCP?"

Three-Layer Architecture: What MCP, Skill, and CLI Actually Do

Many people lump MCP, Skill, and CLI together as "ways to let AI use tools." But they actually operate at completely different architectural layers — understanding this is the foundation for making the right choice.

Capability Layer: MCP

MCP answers "what can AI do?" It's a standard protocol that connects AI to databases, APIs, and cloud services through a unified interface. The key difference from traditional APIs: AI can dynamically discover available tools at runtime instead of having every tool's usage hardcoded into the prompt.
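As a concrete illustration, here is roughly what registering an MCP server looks like in a project-scoped `.mcp.json` for Claude Code (the server name, package, and env variable are illustrative — check the server's own docs for the exact command):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}" }
    }
  }
}
```

Once registered, the client asks the server at runtime what tools it offers (the `tools/list` call) — nothing about the tools is hardcoded into your prompt.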

Procedure Layer: Skill

Skill answers "how should AI do it?" At its core, it's a workflow instruction in markdown format that defines task sequences, quality standards, and decision logic. In my own hands-on work, I've built an entire content production pipeline out of Skills (from topic selection to translation) — the AI reads the Skill and knows exactly what to do at each step and what format to produce.
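A minimal sketch of what such a Skill file can look like (the frontmatter fields follow the Claude Code Skills convention; the pipeline steps themselves are illustrative, not my actual setup):

```markdown
---
name: article-pipeline
description: Produce a publish-ready article from a topic brief
---

1. Read the topic brief and draft an outline (H2 sections only).
2. Write the draft; every factual claim needs a source link.
3. Run the style check: TL;DR under 5 bullets, no passive voice in headings.
4. Save to `drafts/<slug>.md` and list open questions at the end.
```

That's the whole artifact — plain text, versioned in git, zero server operations. This is why the maintenance-cost gap against MCP is so large.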

Execution Layer: CLI

CLI is the most direct approach — AI calls existing command-line tools on your machine via Bash. git, curl, npm, gh — tools you already use, and the AI can use them too.

Side-by-Side Comparison

| Dimension | MCP | Skill | CLI |
| --- | --- | --- | --- |
| Nature | Tool connection protocol | Workflow instructions (prompt) | Command-line tools |
| What it solves | What AI can do | How AI should do it | Direct execution |
| Medium | Server process | Markdown files | Existing system tools |
| Cross-platform | All MCP clients | Primarily Claude Code | Any shell environment |
| Maintenance cost | High (server ops) | Very low (plain text) | Near zero |
| Best analogy | USB-C port standard | SOP handbook | Swiss army knife in your pocket |

Key insight: These three aren't mutually exclusive. The smartest approach is to use all three layers together — Skill defines the process, CLI handles most tasks, and MCP only kicks in when you need cross-platform distribution or enterprise governance.

Benchmark Results: How Much Does CLI Actually Win By?

Data speaks louder than opinions. My own content pipeline runs dozens of CLI calls daily (git, curl, file system operations), and I've never hit the schema bloat or connection timeout issues that plague MCP. But gut feeling isn't enough — let's look at hard numbers. According to Scalekit's 75-run benchmark, CLI dominates MCP across efficiency metrics:

| Metric | CLI | MCP (Direct) | Gap |
| --- | --- | --- | --- |
| Token consumption (GitHub query) | 1,365 | 44,026 | 32x |
| Monthly cost (10K operations) | $3.20 | $55.20 | 17x |
| Success rate | 100% | 72% | CLI wins |

Why such a dramatic difference? It comes down to "schema bloat." Every time AI calls an MCP server, the server dumps all tool definitions into the context window. Take the GitHub MCP server — it exposes 43 tools, and the schema alone eats a massive chunk of tokens, even if you only need one of them.
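The headline multipliers follow directly from the benchmark figures above — a quick sanity check (numbers from the table; the rounding is mine):

```python
# Per-query token counts and per-10K-operation costs from the
# Scalekit benchmark table above.
cli_tokens, mcp_tokens = 1_365, 44_026
cli_cost, mcp_cost = 3.20, 55.20  # USD

token_ratio = mcp_tokens / cli_tokens  # ~32x
cost_ratio = mcp_cost / cli_cost       # ~17x

print(f"MCP uses {token_ratio:.0f}x the tokens and costs {cost_ratio:.0f}x as much")
```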

That said, MCP isn't unusable in every scenario. Deploying an API gateway in front for schema filtering and connection pooling can bring costs down to roughly $5/month with 99%+ reliability. But that adds architectural complexity that's probably not worth it for individual developers.
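What "schema filtering" means in practice: the gateway forwards only the tool definitions the current task needs, so the model never sees the other 40-odd schemas. A minimal sketch (the tool-list shape loosely follows MCP's `tools/list` response; the tool names are illustrative):

```python
def filter_tools(all_tools: list[dict], needed: set[str]) -> list[dict]:
    """Pass through only the tool schemas the current task requires."""
    return [t for t in all_tools if t["name"] in needed]

tools = [
    {"name": "create_issue", "inputSchema": {"type": "object"}},
    {"name": "search_code", "inputSchema": {"type": "object"}},
    {"name": "merge_pr", "inputSchema": {"type": "object"}},
]

slim = filter_tools(tools, {"search_code"})
print(len(slim))  # 1 — one schema in context instead of all of them
```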

30-Second Decision Framework

Don't want to read the whole article? Use this decision tree:

Does your AI need to act on behalf of multiple different users? (OAuth authorization, tenant isolation, audit requirements)

  • Yes → Use MCP. This is MCP's truly irreplaceable use case.

Are you just automating your own workflow?

  • Yes → CLI + Skill. Your agent inherits local permissions — that's enough.

Does the service you're connecting to have a CLI?

  • Yes (git, gh, npm, aws-cli...) → Use CLI directly
  • No (Notion, Figma, Slack — API only) → Use an existing community MCP server

Do you just need to teach AI a workflow?

  • Yes → Use Skill (or your platform's equivalent prompt config file)
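The decision tree above, sketched as a function (the question names and return labels are mine, not part of any spec):

```python
def pick_integration(multi_user: bool, automating_own_workflow: bool,
                     service_has_cli: bool, only_need_workflow: bool) -> str:
    """Encode the 30-second decision framework as a single function."""
    if multi_user:                  # OAuth, tenant isolation, audit trails
        return "MCP"
    if only_need_workflow:          # just teaching the AI a process
        return "Skill"
    if automating_own_workflow and service_has_cli:
        return "CLI + Skill"
    if not service_has_cli:         # Notion, Figma, Slack: API only
        return "community MCP server"
    return "CLI"

print(pick_integration(multi_user=False, automating_own_workflow=True,
                       service_has_cli=True, only_need_workflow=False))
# prints "CLI + Skill"
```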

Three Typical Scenarios

Scenario A: Solo developer using Claude Code for a side project → CLI + Skill. If Bash can do it, you don't need MCP. Use Skill to define code review standards and commit conventions. Lowest cost, highest reliability.

Scenario B: SaaS product letting users manage their Stripe accounts via AI → MCP is necessary. Each user has their own OAuth token, requiring tenant isolation and access control. CLI can't do this.

Scenario C: Team wants both Cursor and Claude Code to query their internal wiki → MCP makes sense. Build one MCP server, both clients can connect — no duplicate integration work.

Security Risks: What You Need to Know Before Using MCP

MCP's convenience comes with real security risks — not theoretical ones.

A 2026 whitepaper from the Coalition for Secure AI (CoSAI) identified over 40 MCP-specific threat categories. According to the Zuplo MCP Report, 24% of public MCP servers currently have zero authentication.

Real Attacks That Have Happened

  • Prompt injection hijacking: Attackers embedded malicious instructions in public GitHub Issues. When the AI agent read them, it was hijacked into exfiltrating data from private enterprise repositories.
  • Supply chain attacks: Fake Postmark MCP servers appeared on npm — functionally normal on the surface, but secretly BCCing all emails to the attacker.
  • SQL injection: A Supabase MCP was exploited via SQL injection in support tickets, leaking high-privilege integration tokens.

For a deeper dive into AI agent security defense frameworks, check out this complete guide.

Defense Checklist Before Using MCP

  1. Principle of least privilege: Only grant MCP servers the minimum access scope needed to complete the task
  2. Independent input validation: Don't blindly trust LLM tool-call decisions — add an independent validation layer before sensitive operations
  3. Source verification: Confirm the source is trustworthy before installing any MCP server — avoid unknown third-party servers
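Point 2 in practice: before executing a tool call the model proposed, run it through your own check instead of trusting the LLM's judgment. A minimal sketch (the tool names and blocked-pattern list are illustrative — a real deployment would use parameterized policies, not regexes):

```python
import re

SENSITIVE_TOOLS = {"execute_sql", "send_email"}          # require extra checks
BLOCKED_PATTERNS = [r"\bDROP\b", r"\bGRANT\b", r";--"]   # crude SQL red flags

def validate_tool_call(tool: str, args: dict) -> bool:
    """Independent gate between the LLM's decision and actual execution."""
    if tool not in SENSITIVE_TOOLS:
        return True
    payload = " ".join(str(v) for v in args.values())
    return not any(re.search(p, payload, re.IGNORECASE) for p in BLOCKED_PATTERNS)

print(validate_tool_call("execute_sql", {"query": "SELECT * FROM tickets"}))  # True
print(validate_tool_call("execute_sql", {"query": "DROP TABLE users"}))       # False
```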

Conclusion

MCP isn't dead, but its real value lies in "distribution" and "governance," not "efficiency."

If you're an individual developer or digital worker, the most pragmatic strategy is: CLI + Skill as your default, MCP on demand. Use CLI for tasks that have command-line tools, use Skill to define workflows, and only bring in MCP when you need cross-platform distribution or need to connect cloud services without a CLI.

This isn't a pick-one-of-three problem — it's a three-layer architecture where each layer has its role. Once you understand what each layer solves, you won't be swayed by the extreme voices claiming "MCP is dead" or "MCP is essential."

Want to try building Skills yourself? We have a complete Claude Code Skills guide that walks you through creating your own workflow from scratch.

FAQ

If all the big tech companies support MCP, why do people say it's dead?

Because 'dead' refers to efficiency, not adoption. MCP consumes up to 32x more tokens than CLI for individual dev tasks and has a higher failure rate. Big tech is betting on MCP for its enterprise governance, multi-tenant authorization, and cross-platform distribution value — a completely different use case from solo development. 'MCP is dead' reflects developer backlash against overhype, not the protocol's demise.

I'm a solo developer. Should I invest time learning MCP right now?

Not heavily. CLI paired with Skill achieves near-100% success rates and up to 32x better token efficiency, without the overhead of OAuth setup and server maintenance. The one exception is when you need to connect cloud services like Notion or Figma that have no CLI — using an existing community MCP server makes sense there.

Skill only works in Claude Code. What about other platforms?

That's correct — Skill is currently tied to the Claude Code ecosystem (with partial Gemini CLI compatibility). If you need cross-platform tool capabilities (Cursor, ChatGPT, Windsurf), MCP is the right choice. But if you just need to define workflows and standards, most platforms have similar mechanisms: Cursor has .cursorrules, GitHub Copilot has .github/copilot-instructions.md. Same concept, different format.
