OpenAI Codex CLI Complete Guide: Terminal AI Coding Agent Review & Claude Code Workflow Split

April 19, 2026
Written by Luna · Researched by Mia · Reviewed by Eno · Continuously updated · 10 min read

OpenAI Codex CLI Complete Guide: Not the Old API Revived — It's an AI Coding Agent in Your Terminal

If you hear "OpenAI Codex" and immediately think of the 2021 code-completion API that was shut down in 2023, you're not alone — but you're missing something entirely different. The Codex CLI launched in 2025 is an open-source terminal coding agent built from scratch in Rust, and as of April 2026 it has amassed over 75K GitHub stars, 14.5 million monthly npm downloads, and 3 million weekly active users.

This article will help you understand what Codex CLI actually is, how to install it, how it differs from Claude Code, and whether you should add it to your toolkit.

TL;DR

  • Codex CLI has nothing to do with the 2021 Codex API — it's a brand-new Rust open-source terminal agent from 2025
  • The tool is free (Apache-2.0), but AI capabilities require an OpenAI account — ChatGPT Plus subscribers can use it at no extra API cost
  • The Rust architecture delivers ~80MB memory usage and 240+ tokens/sec processing (DataCamp test environment), though Claude Code still wins in blind code-quality tests
  • Codex CLI and Claude Code are complementary: Codex CLI excels at batch refactoring and CI environments; Claude Code excels at complex architectural reasoning
  • Native MCP support means you can reuse MCP servers you've already configured in Claude Code almost directly

You Thought Codex Was Dead — But This Isn't the Codex You Remember

The confusion is perfectly understandable. OpenAI launched the GPT-3-based Codex API in 2021, primarily for code completion (early GitHub Copilot was powered by it), then officially shut it down in March 2023. Up to that point, "Codex is dead" was accurate.

But in April 2025, OpenAI released something completely different under the same brand name: Codex CLI — a terminal coding agent written from scratch in Rust.

The difference isn't a version upgrade; these are entirely different product categories:

| | Old Codex API (2021–2023) | New Codex CLI (2025–) |
| --- | --- | --- |
| Nature | Cloud code-completion API | Local terminal coding agent |
| Architecture | Fine-tuned GPT-3 model | Native Rust app + codex-mini model |
| Usage | API calls | Direct terminal interaction |
| License | Closed-source commercial API | Apache-2.0 open source |
| Language | Python service | 95.6% Rust |
| Status | Shut down | Actively developed (700+ releases) |

In our experience, a large number of people see "Codex" and assume it's the old tool — especially in non-English communities. If you were one of them, now's the time for a fresh look.

The Rise of Terminal Coding Agents — Why Codex CLI's Scale Deserves Serious Attention

Terminal-native agents became part of mainstream workflows starting in 2025: they can be embedded in CI/CD pipelines, batch-process large numbers of files, and run in remote SSH sessions — capabilities that IDE plugins structurally lack. Codex CLI's numbers show this is no niche tool: npm monthly downloads surged from 82K in April 2025 to 14.53 million by March 2026 (company-reported), with over 3 million weekly active users (per Sam Altman's April 2026 public statement, company-reported data).

Installation & Setup — ChatGPT Account, API Key, or Both?

Let's address the most common question first: Codex CLI is open source, but that doesn't mean the AI features are free.

The tool itself is Apache-2.0 open source — you can freely install, modify, and even fork it. But the AI inference behind it requires OpenAI's models, so you need an account. The good news: ChatGPT Plus subscribers have a path that costs nothing extra.

Installation

# npm (recommended)
npm install -g @openai/codex

# Or Homebrew (macOS)
brew install --cask codex

Choosing Your Auth Method

Option 1: ChatGPT Account Auth (recommended for Plus/Pro users)

codex auth
# Your browser opens the OpenAI login page; setup completes automatically after sign-in

This approach doesn't require managing API keys and incurs no extra API charges — usage counts toward your ChatGPT subscription plan.

Option 2: API Key Auth

export OPENAI_API_KEY="sk-..."

Or configure in ~/.codex/config.toml:

preferred_auth_method = "apikey"

API key mode charges per token. codex-mini-latest rates (as of April 2026):

  • Input: $1.50 / 1M tokens
  • Output: $6.00 / 1M tokens
  • Prompt caching discount: 75%
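To make the pricing concrete, here is a back-of-the-envelope cost estimator. The function name is ours, and we assume the 75% prompt-caching discount means cached input tokens are billed at 25% of the normal input rate — a sketch, not an official billing formula:

```python
# Rough cost estimator for codex-mini-latest in API key mode.
# Rates are the April 2026 figures quoted above; the 25%-of-input-rate
# treatment of cached tokens is our reading of the 75% discount.

INPUT_RATE = 1.50 / 1_000_000   # USD per input token
OUTPUT_RATE = 6.00 / 1_000_000  # USD per output token
CACHE_DISCOUNT = 0.75           # prompt caching discount

def estimate_codex_cost(input_tokens: int, output_tokens: int,
                        cached_input_tokens: int = 0) -> float:
    """Estimate the USD cost of one workload (cached tokens <= input tokens)."""
    uncached = (input_tokens - cached_input_tokens) * INPUT_RATE
    cached = cached_input_tokens * INPUT_RATE * (1 - CACHE_DISCOUNT)
    return uncached + cached + output_tokens * OUTPUT_RATE

if __name__ == "__main__":
    # e.g. a refactoring session: 1M input tokens (800K cache hits), 100K output
    print(f"${estimate_codex_cost(1_000_000, 100_000, 800_000):.2f}")  # → $1.20
```

Even a heavy session stays in low single-digit dollars, which is why the $5/$50 free credits below go a long way.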

Free Credits

Plus users receive $5 and Pro users receive $50 in free API credits (valid for 30 days, as of April 2026), automatically issued upon login.

From our hands-on testing: if you're already a ChatGPT Plus subscriber, codex auth is the most hassle-free way to get started — setup takes 30 seconds with no API key or billing to worry about.

Your First Coding Task — From Hello World to Real-World Scenarios

Once installed, just give Codex CLI a task right in your terminal:

codex "Write a Python script that parses CSV and outputs JSON"

Codex CLI will analyze your request, generate the code, and ask whether you want to execute it. The key concept here is approval mode — you decide how much autonomy Codex CLI gets.

Three Approval Modes

| Mode | Behavior | Best For |
| --- | --- | --- |
| suggest (default) | All actions require your confirmation | First-time use, learning tool behavior |
| auto-edit | Auto-edits files, but commands need confirmation | Daily development: trust the code but keep control over system operations |
| full-auto | Fully autonomous execution | CI/CD environments, batch tasks |

Switching modes:

# Specify at launch
codex --approval-mode full-auto "Refactor all test files to use vitest"

# Or set a default in config.toml

Important: full-auto doesn't mean "unsafe." In this mode, Codex CLI enables kernel-level sandboxing — macOS uses the Seatbelt framework, Linux uses bubblewrap, and Windows uses native sandboxing under PowerShell. According to the official Sandbox documentation, the sandbox blocks unnecessary network access by default, protecting your work environment from unintended external calls.

Five Practical Scenarios

  1. Script generation: codex "Write a bash script that monitors disk usage and sends Slack notifications"
  2. Bug fixing: codex "This test is failing — find the cause and fix it"
  3. Test writing: codex "Generate unit tests for all functions in src/utils/"
  4. Code refactoring: codex "Convert all var declarations to const/let"
  5. Documentation generation: codex "Generate API docs for this project"

Performance Gains from the Rust Architecture — Not Just Numbers, but How It Feels

Codex CLI was rewritten from TypeScript to Rust in late 2025. This wasn't a "jumping on the Rust bandwagon" decision — it had clear performance objectives. According to the official GitHub Discussion #1174, the core motivations for the rewrite were startup speed and memory efficiency.

Concrete numbers (cross-verified between DataCamp reviews and third-party benchmarks):

  • Memory usage: ~80MB (Claude Code can reach several GB when processing large projects)
  • Token processing speed: 240+ tokens/sec (DataCamp test environment)
  • Terminal-Bench 2.0 score: 77.3% (vs Claude Code 65.4%) (DataCamp review)

But there's an important caveat to be clear about: Terminal-Bench targets terminal-native tasks (scripting, system administration, DevOps) and doesn't represent overall code quality. In blind tests (developers didn't know which tool generated the code), Claude Code was rated higher quality in 67% of comparisons, versus 25% for Codex CLI.

So the performance advantage is real, but context matters:

  • Performance-sensitive scenarios: Low-memory VPS, CI environments, long-running batch operations -> Codex CLI has a structural advantage
  • Quality-sensitive scenarios: Complex development requiring precise architectural decisions -> Claude Code's reasoning capabilities are better suited
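To make the CI case concrete, here is a hypothetical GitHub Actions job sketch — the workflow layout, schedule, and the OPENAI_API_KEY secret name are our assumptions, not from the Codex docs. It installs bubblewrap because, as noted later, the Linux sandbox implementation depends on it:

```yaml
name: nightly-codex-refactor
on:
  schedule:
    - cron: "0 3 * * *"   # every night at 03:00 UTC

jobs:
  codex-batch:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # bubblewrap backs Codex CLI's kernel-level sandbox on Linux
      - run: sudo apt-get update && sudo apt-get install -y bubblewrap
      - run: npm install -g @openai/codex
      # full-auto: unattended execution inside the sandbox
      - run: codex --approval-mode full-auto "Refactor all test files to use vitest"
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

The ~80MB footprint matters here: the agent fits comfortably on the smallest hosted runners.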

Codex CLI vs Claude Code — Not an Either/Or, but Workflow Splitting

This is the most-searched question, but "which one is better" is the wrong framing. We use both in our daily work — the key is matching different tools to different tasks.

Workflow Splitting Matrix

| Scenario | Recommended Tool | Rationale |
| --- | --- | --- |
| Batch refactoring / script generation | Codex CLI | Lower token cost, faster processing, suited for batch tasks |
| Complex architectural decisions, multi-file comprehension | Claude Code | 67% win rate in blind code-quality tests, more precise reasoning |
| CI/CD pipeline integration | Codex CLI | 80MB memory, kernel-level sandbox, natively suited for unattended operation |
| Precise debugging, error analysis | Claude Code | Stronger multi-step reasoning |
| Vendor lock-in sensitivity | Codex CLI | Apache-2.0 open source, forkable and self-hostable |
| Frontend UI development | Claude Code | Deeper understanding of React/Vue and similar frameworks |
| Mass file renaming / format standardization | Codex CLI | High batch-operation efficiency, low cost |

Fundamental Security Differences

The two tools take different security philosophies:

  • Codex CLI: Kernel-level sandboxing (macOS Seatbelt, Linux bubblewrap) — isolation at the OS level
  • Claude Code: Application-layer hooks — control at the application level

This means that in enterprise environments with strict security requirements, Codex CLI's sandbox mechanism provides deeper protection.

Practical Advice

If you're already using Claude Code, there's no need to switch — add Codex CLI as a second tool. Route batch tasks to Codex CLI, keep precision work on Claude Code. Both support MCP, so your toolchain can be shared.

MCP Integration in Practice — Connecting Codex CLI to Your Existing Toolchain

Codex CLI natively supports MCP (Model Context Protocol), meaning you can connect it to external tools — databases, file systems, documentation search, even other AI services.

Configuration

Add MCP server settings in ~/.codex/config.toml:

[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]

[mcp_servers.context7]
command = "npx"
args = ["-y", "@upstash/context7-mcp"]

Or manage via CLI commands:

codex mcp add filesystem npx -y @modelcontextprotocol/server-filesystem /path/to/dir

Configuration Options

Each MCP server supports the following settings:

  • command (required): Command to start the server
  • args (optional): Arguments passed to the command
  • startup_timeout_sec (default 10 sec): Server startup timeout
  • tool_timeout_sec (default 60 sec): Tool execution timeout
  • enabled (default true): Temporarily disable without removing the configuration
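Putting those options together, a fully spelled-out server entry might look like this (the timeout values are illustrative, not recommendations):

```toml
[mcp_servers.context7]
command = "npx"
args = ["-y", "@upstash/context7-mcp"]
startup_timeout_sec = 20   # allow a slower cold start (default 10)
tool_timeout_sec = 120     # permit long-running tool calls (default 60)
enabled = true             # flip to false to disable without deleting
```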

Project-Level Configuration

Beyond the global ~/.codex/config.toml, you can also create .codex/config.toml in your project root to keep MCP settings project-specific. Note that project-level settings only take effect in trusted projects.

Practical tip: If you've already configured MCP servers in Claude Code, you just need to port the command and args to TOML format. Here's the filesystem server as an example:

Claude Code (~/.claude/settings.json):

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
    }
  }
}

Codex CLI (~/.codex/config.toml):

[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]

The command and args are identical — only the format syntax differs.
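If you have many servers, the port can be scripted. Below is a minimal sketch — the function name is ours, and it assumes each server entry uses only command and args and that server names are valid TOML bare keys:

```python
import json

def claude_mcp_to_codex_toml(settings_json: str) -> str:
    """Turn Claude Code's "mcpServers" JSON into Codex CLI [mcp_servers.*] TOML."""
    servers = json.loads(settings_json).get("mcpServers", {})
    sections = []
    for name, cfg in servers.items():
        # Re-quote each argument for a TOML string array
        args = ", ".join(f'"{a}"' for a in cfg.get("args", []))
        sections.append(
            f"[mcp_servers.{name}]\n"
            f"command = \"{cfg['command']}\"\n"
            f"args = [{args}]"
        )
    return "\n\n".join(sections)
```

Run it on your ~/.claude/settings.json and append the output to ~/.codex/config.toml, then review the result by hand before trusting it.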

Open-Source Ecosystem & Security Considerations — Apache-2.0, Sandboxing, Data Policy

Codex CLI's Apache-2.0 license means:

  • Commercial use allowed: Enterprises can deploy directly without additional licensing
  • Modifiable and forkable: You can build customized versions from the source
  • Patent protection: Apache-2.0 includes explicit patent grant clauses

Community Activity

Numbers as of April 2026:

  • GitHub stars: 75K+
  • Contributors: 428
  • Releases: 700+ (averaging nearly 2 releases per day)
  • npm monthly downloads: 14.53 million (March 2026)
  • Community forks: 10.7K+, including derivative projects like every-code and open-codex

These numbers (from public GitHub and npm data) indicate a project with sustained momentum, not an experiment OpenAI built and abandoned.

Sandbox Security Mechanism

Codex CLI's full-auto mode uses kernel-level sandboxing:

  • macOS: Seatbelt framework (works out of the box)
  • Linux / WSL2: bubblewrap (install required: sudo apt install bubblewrap)
  • Windows PowerShell: Native Windows sandboxing

According to the official Sandbox documentation, the sandbox blocks unnecessary network access by default and restricts file system access to the working directory scope.

Data Policy Considerations

When using API key mode, your code is sent to OpenAI's models for inference. OpenAI's API data policy states that API inputs are not used for model training (as of April 2026 policy). However, if you handle highly sensitive enterprise code, you should still assess whether this meets your organization's compliance requirements.

Codex CLI vs Aider vs Gemini CLI — The Full Open-Source Terminal Agent Landscape

Codex CLI isn't the only option for terminal AI agents in 2026. Here's how the main contenders are positioned:

Quick Positioning Comparison

| | Codex CLI | Aider | Gemini CLI | Claude Code |
| --- | --- | --- | --- | --- |
| Open-source license | Apache-2.0 | Apache-2.0 | Apache-2.0 | Closed source |
| Model dependency | OpenAI (GPT-4o, codex-mini) | Any LLM | Google (Gemini) | Anthropic (Claude) |
| Language | Rust | Python | Python | TypeScript |
| Sandbox mechanism | Kernel-level | None | Limited | Application-layer |
| Free tier | Included with ChatGPT Plus | Tool free, model costs separate | Higher free quota | Max plan from $100/mo |
| Standout feature | Performance, sandbox, large community | Model-agnostic, most flexible | Large context window | Highest code quality |

How to Choose?

  • You rely on the OpenAI / GPT ecosystem: Codex CLI is the native choice
  • You want to use local models or mix models: Aider is the only tool supporting any LLM
  • You work with massive codebases and need long context: Gemini CLI has the largest context window
  • You prioritize code quality and complex reasoning: Claude Code still leads
  • You need the strictest sandbox isolation: Codex CLI's kernel-level approach is the strongest

These tools aren't mutually exclusive. From our hands-on experience, the most efficient approach is switching tools based on task type rather than using just one.

Is Codex CLI Right for You? — Your Checklist

Use this checklist to determine whether you should try Codex CLI now:

Good reasons to get started immediately:

  • You're a ChatGPT Plus or Pro subscriber (zero extra cost to get started)
  • You have substantial batch refactoring or script generation needs
  • You need a lightweight AI agent in your CI/CD pipeline
  • You have vendor lock-in concerns and want an open-source alternative
  • You already work in terminal-based workflows and don't rely on an IDE

Fine to wait:

  • Your primary need is complex architectural reasoning and precise debugging -> Claude Code is still the better choice for now
  • You need fully local, air-gapped AI -> Consider Aider + a local LLM
  • You mainly write frontend code and work inside an IDE -> IDE-integrated tools may be a better fit

Getting Started

You don't need to give up your existing tools. Test Codex CLI with a real task — such as refactoring a batch of files or generating tests — and if it performs well in your scenario, add it to your toolkit as a complementary option.

# 30-second start
npm install -g @openai/codex
codex auth
codex "Check this project's TODOs and generate an issue list"

The era of terminal coding agents has arrived, and Codex CLI is one of the lightest and most open options in this ecosystem. Whether you try it is up to you.

FAQ

Is Codex CLI free?

The Codex CLI tool itself is free and open source under Apache-2.0, but AI features require an OpenAI account. ChatGPT Plus subscribers can use it at no extra API cost via ChatGPT auth; in API key mode, codex-mini-latest is priced at $1.50/1M input tokens and $6.00/1M output tokens. Plus/Pro users receive $5/$50 in free API credits (valid for 30 days).

Do I need an OpenAI API key to use Codex CLI?

Not necessarily. ChatGPT Plus and Pro subscribers can sign in with their ChatGPT account using the codex auth command — no separate API key required. If you prefer pay-per-token flexibility, you can switch to API key mode by setting preferred_auth_method = 'apikey' in config.toml.

Can Codex CLI work offline?

No. Codex CLI requires a connection to OpenAI's model API since AI inference runs in the cloud. However, in full-auto mode the sandbox blocks unnecessary network access, ensuring your code isn't accidentally sent to any server other than OpenAI's.

Does Codex CLI support Windows?

Yes. In PowerShell, Codex CLI uses native Windows sandboxing; under WSL2 it uses the Linux sandbox implementation (bubblewrap must be installed first). macOS and Linux work out of the box.
