At $20/month, the AI subscription market has run out of pricing ideas. Claude Pro, ChatGPT Plus, Google AI Pro, and Perplexity Pro all land at the same number. GitHub Copilot Pro is half that. Microsoft’s offering looks similar on the pricing page until you add the mandatory M365 subscription underneath — at which point you’re looking at $24–51/user/month depending on your base license. That gap matters.
I tested all six subscriptions over three weeks on a 2024 MacBook Pro M3 Max, 48GB RAM, macOS Sequoia 15.2, running the same evaluation I use for every AI tool: a 50-prompt suite covering refactoring, bug identification, architectural decisions, document summarization, and research queries. I also stress-tested context windows by loading my production-adjacent 200-file TypeScript codebase progressively, watching for quality degradation as fill percentage climbed toward 80% and beyond. The goal wasn’t finding the most powerful model — it was finding the best return on $20/month for different types of working days.
Spoiler: the $20 tools are not equal. The differences land in context window size, what happens when you hit usage limits, and how honest vendors are about those limits before you hit them. On that last criterion, the industry as a whole is not covering itself in glory in mid-2026.
Quick Verdict
| Scenario | Pick | Why |
|---|---|---|
| Best overall value | Claude Pro ($20/mo) | 1M context, Claude Code included, only annual option in the tier |
| Best feature breadth | ChatGPT Plus ($20/mo) | Web search, DALL-E, Deep Research, Codex in one sub |
| Best for research | Perplexity Pro ($20/mo) | Real-time citations, multi-model selector, Deep Research |
| Best developer value | GitHub Copilot Pro ($10/mo) | Half price, IDE-native, unlimited completions |
| Best Workspace integration | Google AI Pro ($19.99/mo) | 1M context + Gemini across Gmail/Docs/Sheets |
| Hardest to justify | Microsoft 365 Copilot | $24–51+/user/mo all-in, output quality trails free alternatives |
Testing Methodology
All evaluation was done on a 2024 MacBook Pro M3 Max, 48GB RAM, macOS Sequoia 15.2 — my daily driver, not a controlled benchmark box. I ran each tool through a 50-prompt evaluation covering knowledge work (email drafting, document summarization, multi-step research) and technical tasks (refactoring, bug diagnosis, architecture questions). For context window performance, I loaded the same 180K-token technical document and the same 200-file TypeScript repo across all tools with a 1M context ceiling, watching whether output quality held as I filled more of the available context. Pricing and feature details were verified from vendor websites in late April and early May 2026. Usage limit behavior was observed by actually hitting the limits, not by reading documentation.
Pricing Head-to-Head
| Tool | Monthly | Annual Option | Free Tier | Context Window | All-In Cost |
|---|---|---|---|---|---|
| Claude Pro | $20/mo | $17/mo ($200/yr) | Yes | 1M tokens | $17–$20/mo |
| ChatGPT Plus | $20/mo | None | Yes (ads) | 32K / 256K Thinking | $20/mo |
| Google AI Pro | $19.99/mo | None shown | No | 1M tokens | $19.99/mo |
| Perplexity Pro | $20/mo | $16.67/mo ($200/yr) | Yes | Varies by model | $16.67–$20/mo |
| GitHub Copilot Pro | $10/mo | None | Yes (limited) | Varies by model | $10/mo |
| Microsoft 365 Copilot | $18–21/user/mo | Annual required | 60 credits | Varies | $24–51+/user/mo* |
*M365 Copilot requires a qualifying M365 subscription. M365 Business Basic ($6/user/mo) is the minimum. Most deployments run M365 Business Standard ($12.50/user/mo) or higher.
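The footnote arithmetic is worth making explicit, because the advertised price and the real price diverge more than any other tool here. A minimal sketch using the list prices above (the tier labels are mine, chosen for readability):

```python
# All-in monthly cost of M365 Copilot = required M365 base license + Copilot add-on.
# Prices are the May 2026 list prices cited above, in USD per user per month.
BASE_LICENSES = {
    "Business Basic": 6.00,      # minimum qualifying subscription
    "Business Standard": 12.50,  # more typical deployment
}
COPILOT = {
    "promo (through Jun 30, 2026)": 18.00,
    "standard (from Jul 1, 2026)": 21.00,
    "Enterprise": 30.00,
}

for base_name, base_price in BASE_LICENSES.items():
    for tier_name, tier_price in COPILOT.items():
        total = base_price + tier_price
        print(f"{base_name} + Copilot {tier_name}: ${total:.2f}/user/mo")

# Cheapest possible all-in configuration: Business Basic + promo pricing.
cheapest = min(b + c for b in BASE_LICENSES.values() for c in COPILOT.values())
print(f"Floor: ${cheapest:.2f}/user/mo")  # $24.00/user/mo
```

Even the absolute floor is $24/user/month, and a Business Standard shop on the Enterprise tier pays $42.50 before E3/E5 licensing pushes the range toward $51+.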
Feature Comparison
| Tool | Flagship Model | Context | Web Search | Coding | Unique Differentiator | Rating |
|---|---|---|---|---|---|---|
| Claude Pro | Opus 4.7 | 1M tokens | No | Claude Code included | 128K max output, context compaction | 8.7/10 |
| ChatGPT Plus | GPT-5.5 Instant | 32K / 256K | Yes | Codex (capped) | Broadest integration ecosystem | 8.3/10 |
| Google AI Pro | Gemini 3.1 Pro | 1M tokens | Via Workspace | Code Assist | 5 TB storage, Workspace-native | 7.6/10 |
| GitHub Copilot Pro | Multi-model | Varies | No | Core product | IDE-native completions | 7.5/10 |
| Perplexity Pro | GPT-5.5/Claude/Gemini | Varies by model | Yes (core) | No | Real-time citations, model selector | 7.0/10 |
| M365 Copilot | GPT-4o class | Varies | Via M365 | Limited | M365 app embedding | 5.4/10 |
Claude Pro — Best Overall at $20/Month
Best for: Knowledge workers, developers, anyone doing sustained multi-step work on large documents or codebases
Claude Pro at $20/month — or $17/month on annual billing — gives you Claude Opus 4.7 with a 1M token context window and 128K max output tokens. Those numbers matter more in practice than most spec comparisons suggest.
I loaded the full 200-file TypeScript repo into Claude via file uploads during context window testing. At approximately 70% fill, I asked it to trace a data access pattern across the codebase and identify where two competing abstraction layers were causing inconsistent behavior. It returned a specific, accurate analysis pointing to four concrete files. That’s not a task you can run with a 32K context window. The gap between 1M and 32K tokens isn’t marginal — it determines whether certain classes of work are tractable at all.
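The fill percentages quoted throughout this review come from rough token estimates, not from vendor tooling. A minimal sketch of how I sized repos against context windows, assuming the common ~4-characters-per-token heuristic (real tokenizers vary by model, so treat the output as a ballpark):

```python
import os

CHARS_PER_TOKEN = 4  # rough heuristic for English prose and code; tokenizers vary


def estimate_repo_tokens(root: str, exts: tuple = (".ts", ".tsx")) -> int:
    """Roughly estimate the token count of a repo from its character count."""
    total_chars = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                try:
                    with open(os.path.join(dirpath, name), encoding="utf-8",
                              errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    continue  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN


def fill_percent(tokens: int, context_window: int) -> float:
    """What fraction of a context window the repo would occupy."""
    return 100.0 * tokens / context_window
```

The same repo that sits at ~70% of a 1M window overflows a 32K window more than twenty times over, which is why the gap determines tractability rather than convenience.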
The 128K max output token ceiling is worth noting separately. GPT-5.5 on Plus maxes at 8K for standard output. The difference shows up on tasks requiring long structured documents, exhaustive code reviews, or detailed multi-section responses without asking the model to continue.
Claude Code is included in the Pro plan web app. For developers at $20/month, this is an asymmetric advantage over competitors — you get a capable agentic coding tool without an additional subscription. Context compaction beta reduces quality degradation in long coding sessions, which is a real friction point I hit repeatedly on other tools during the replace-my-IDE test.
Honestly, the April 2026 incident needs to be on record. On April 21, Anthropic silently removed Claude Code from the Pro plan pricing page — no email, no changelog, no announcement. Developers noticed on Reddit and Hacker News within hours. The response from the community was fast: “If Claude Code is going away for Pro users, I can’t recommend Claude anymore.” Anthropic reversed the change the next morning and called it “a mistake,” confirming it was a ~2% A/B test. The feature is intact. The communication failure was real and matters when you’re evaluating a vendor relationship, not just a feature set.
The Max plans ($100/month for 5x usage, $200/month for 20x) are monthly-only — no annual discount as of May 2026. If you’re a power user hitting daily limits, that’s a meaningful difference from the Pro tier which does offer annual billing.
Pricing tiers:
- Free: Rate-limited access, limited model tier
- Pro: $20/mo or $17/mo annual ($200/yr) — Opus 4.7, 1M context, Claude Code web app
- Max 5x: $100/mo (monthly only, no annual)
- Max 20x: $200/mo (monthly only, no annual)
- Team: $25/user/mo annual — Claude Code premium seat at $150/mo additional
- Enterprise: Custom pricing, custom models, advanced security
Pros:
- 1M token context that holds quality at 80%+ fill — verified in testing against the TypeScript repo
- 128K max output tokens, 16x the ChatGPT Plus standard 8K output ceiling
- Claude Code included in the $20 Pro plan web app
- Only tool in the $20 tier offering annual billing discount ($17/mo)
- Context compaction beta handles quality degradation in long sessions
- Managed Agents with cross-session memory in public beta (May 2026)
Cons:
- No native web search in the standard Pro interface
- April 2026 undisclosed A/B test of removing Claude Code indicates future pricing experiments possible
- Max plan tiers have no annual discount option — unusual among competitors
- Dynamic usage limits are undocumented; hard to predict when you’ll hit them
- Business users wanting Claude Code at scale need a $150/mo Team premium seat
Rating: 8.7/10
ChatGPT Plus — Strong Model, Constrained Context
Best for: General-purpose users who need web search, image generation, and research in one subscription
ChatGPT Plus is $20/month with no annual discount option. As of May 5, 2026, the default model is GPT-5.5 Instant — OpenAI’s new standard, positioned as having reduced hallucination rates in law, medicine, and finance compared to GPT-5.3 Instant. The flagship GPT-5.5 (released April 23) is available on manual selection for harder reasoning tasks.
The case for ChatGPT Plus is feature breadth. Web search is native and doesn’t require manual activation. DALL-E image generation is included. Codex handles code generation. Deep Research runs multi-step web synthesis with structured outputs. Memory across sessions works reasonably well. No single competitor at $20/month bundles all of that.
The structural weakness is the default context window. 32K tokens on GPT-5.5 Instant. At the same $20/month, Claude Pro and Google AI Pro both offer 1M. During my context stress test, I filled the standard Plus context ceiling in under forty minutes of working with the TypeScript repo — and that’s with mindful prompting, not exhaustive file loading.
A 256K Thinking mode exists but requires manual activation per conversation, consumes from a separate weekly cap of 3,000 uses, and doesn’t surface a countdown anywhere visible in the interface. I hit the Thinking mode weekly cap during testing and only discovered it when responses silently dropped in quality.
The silent downgrade behavior was the most disruptive thing I encountered across all six tools. Once you cross 160 messages per 3-hour window on the flagship model, ChatGPT switches to GPT-5.5 mini. The model selector in the interface changes, but there’s no alert. Mid-task, I had an architectural question answered with noticeably shorter, less structured reasoning — and I traced it back to the mini downgrade only after checking the model indicator. For a detailed model quality comparison across twelve specific tasks, ChatGPT Plus vs Claude Pro 2026: An Honest Head-to-Head After 4 Weeks of Real Work covers the divergence in granular detail.
Pricing tiers:
- Free: GPT-5.5 mini with ads
- Go: $8/mo — global tier, basic access (January 2026)
- Plus: $20/mo — GPT-5.5 Instant, 160 messages/3 hrs before downgrade
- Pro $100: $100/mo — heavier Codex use, launched April 9, 2026
- Pro $200: $200/mo — 20x usage vs Plus
- Team: $25/user/mo annual
Pros:
- GPT-5.5 Instant is fast and handles conversational and factual queries well
- Broadest feature set in the $20 tier: web search, DALL-E, Codex, Deep Research, memory
- Native web search without manual activation
- Up to 80 file uploads per 3-hour window
- GPT-5.5 Thinking mode available (manual activation, 256K context)
Cons:
- 32K default context — smallest window in this comparison at the same price point
- Silent downgrade to GPT-5.5 mini when message limits hit; no proactive warning
- 256K Thinking mode requires manual activation and has a separate 3,000/week cap
- No annual billing discount option
- Upload quota not surfaced in interface — discovered only by hitting it
Rating: 8.3/10
GitHub Copilot Pro — Best Per-Dollar, Pre-June
Best for: Developers using VS Code, JetBrains, or Visual Studio who want IDE-native AI without browser context switching
GitHub Copilot Pro is $10/month — half the price of every other tool on this list. The core value proposition is IDE embedding: unlimited inline code completions and next-edit suggestions that anticipate follow-on changes when you modify a function, all without leaving the editor.
I ran the replace-my-IDE challenge with Copilot Pro for a full workday. For routine backend work — function signatures, boilerplate expansion, unit test scaffolding — the inline completions are fast and accurate. The flow-state benefit of not switching to a browser tab is real and hard to put in a feature table. Pro accesses multiple models including Claude Opus 4.x, GPT-5.5, and Gemini 3.1 for chat. The multi-model flexibility on a single $10/month subscription is unusual value.
Here’s the complication. On April 27, 2026, GitHub announced that all Copilot plans are transitioning to usage-based AI Credits billing for premium model chat and agentic features effective June 1, 2026. The subscription price stays the same. Token consumption replaces the prior request-based model. Critically, per-token prices for each AI model hadn’t been published as of late April 2026 — nobody outside GitHub knows what a heavy Copilot Workspace session will cost after June 1.
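With no published rates, the best anyone can do before June 1 is bracket the exposure. A sketch with purely hypothetical per-token prices — these are illustrative numbers I made up, not GitHub's actual rates, which were unpublished as of late April 2026:

```python
# Hypothetical flat rates in USD per 1M tokens — NOT GitHub's real June 1 prices,
# which had not been published at the time of writing.
HYPOTHETICAL_RATES = {"low": 3.00, "mid": 10.00, "high": 30.00}


def session_cost(input_tokens: int, output_tokens: int, rate_per_m: float) -> float:
    """Rough bracket: one flat rate applied to all tokens in a session."""
    return (input_tokens + output_tokens) / 1_000_000 * rate_per_m


# A heavy agentic session might easily consume 2M input + 200K output tokens.
for label, rate in HYPOTHETICAL_RATES.items():
    print(label, round(session_cost(2_000_000, 200_000, rate), 2))
```

Under the mid assumption, that single session costs about $22 — more than double the $10/month credit allowance on Pro. That spread, not the subscription price, is the thing developers can't plan around.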
Developers in GitHub Discussion #192948 were unambiguous: “You will get less, but pay the same price.” Another developer described the shift from request-based to token-based billing as something that “could sober up an alcoholic from mere shock.” The billing opacity is the genuine risk. Code completions remain unlimited on all paid plans regardless of the transition — that anchor holds. The exposure is specifically in agentic features and multi-model chat sessions, where consumption becomes unpredictable.
The March 2026 PR tip injection incident is separately worth knowing: Copilot inserted promotional tips into over 1.5 million pull requests, accelerating developer migration to Cursor and Claude Code. For context on the tools that benefited from that migration, GitHub Copilot vs Claude Code 2026: Tested Head-to-Head, One Wins and Copilot vs Cursor vs Claude 2026: 5 AI Coding Assistants Tested & Ranked are worth reading before deciding.
Pricing tiers:
- Free: 2,000 completions + 50 premium requests/month
- Pro: $10/mo — unlimited completions, $10/mo AI Credits for premium model chat
- Pro+: $39/mo — all available models, larger credit allowance
- Business: $19/user/mo — centralized billing, IP indemnification, SAML SSO
- Enterprise: Custom — custom model training on company codebase
- Student: Free with verification
Pros:
- $10/month — most affordable paid option in this comparison
- Unlimited code completions remain flat-rate post-June billing transition
- IDE-native: VS Code, JetBrains, Visual Studio, GitHub.com — no browser tab required
- Next-edit suggestions catch follow-on changes after function modifications
- Multi-model access (Claude Opus 4.x, GPT-5.5, Gemini 3.1) on one subscription
- Student tier free with verification
Cons:
- June 1, 2026 transition to token-based billing — per-token prices unpublished as of late April 2026
- Agentic feature cost under new billing: genuinely unknown until June 1
- March 2026 PR tip injection affected 1.5M+ pull requests — trust incident not fully recovered
- Multi-file agentic reasoning lags Cursor and Claude Code on complex tasks
- Copilot code review consuming GitHub Actions minutes from June 1 — another hidden cost surface
Rating: 7.5/10
Google AI Pro — Bundled Value, Ecosystem Dependent
Best for: Teams already embedded in Google Workspace who want AI in the tools they’re already using
Google AI Pro is $19.99/month — one cent under the $20 floor, which is either deliberate or an accident. The product is Gemini 3.1 Pro with 1M token context, integrated natively across Gmail, Docs, Sheets, Slides, and Meet, plus 5 TB of Google One storage, Gemini Code Assist, and Veo 3.1 video generation.
The Workspace integration is the genuine differentiator. If you live in Google Docs, the ability to ask Gemini to draft, rewrite, or analyze without copy-pasting into a separate chat window removes real friction. I spent two days running client deliverables through Docs with Gemini active — for document-first workflows, the embedded experience is cleaner than any browser-tab alternative.
A note on model naming: the consumer app uses “Gemini 3.1 Pro” while the API uses “Gemini 2.5 Pro” branding. Google hasn’t confirmed whether these are the same model under different marketing names or distinct versions. I can’t verify this independently, and that’s the kind of opacity that erodes trust with technically-minded users.
For pure coding tasks, Gemini 3.1 Pro is the weakest performer in this comparison at the 1M context tier. My coding benchmark produced correct but verbose outputs, and multi-file architectural questions that Claude handled cleanly required more prompting scaffolding with Gemini. The Workspace advantage doesn’t transfer if your workday is code rather than documents.
The undocumented ~100 Pro prompts per day limit is a legitimate operational concern. I discovered it mid-session on a research-heavy day when responses started degrading without explanation. No countdown, no warning before the cap, and thin documentation around when it applies.
Pricing tiers:
- Google AI Plus: $7.99/mo — lighter usage, 2 TB storage
- Google AI Pro: $19.99/mo — Gemini 3.1 Pro, 1M context, 5 TB storage, 1,000 monthly AI credits
- Google AI Ultra: $249.99/mo — Gemini 2.5 Deep Think, 25K credits, YouTube Premium, $100 Google Cloud credits
Pros:
- 1M token context at $19.99/month — matches Claude Pro
- Workspace integration native: Gmail, Docs, Sheets, Slides, Meet without tab switching
- 5 TB Google One storage included — genuine value if you’re paying for storage separately
- Veo 3.1 video generation and unlimited slide generation bundled
- Gemini Code Assist included for development workflows
Cons:
- Undocumented ~100 daily prompt cap; interface shows no warning before degradation
- Weakest coding performance of the 1M-context tools in this comparison
- AI credits accounting is opaque — unclear which actions consume credits vs. which are unlimited
- “Gemini 3.1 Pro” vs “Gemini 2.5 Pro” naming inconsistency between consumer and API
- Ultra at $249.99/month is a steep jump with limited incremental value for non-developer users
- No annual pricing option shown
Rating: 7.6/10
Perplexity Pro — Best Research Tool, Worst Billing Track Record
Best for: Researchers, analysts, and journalists who need real-time web sourcing with traceable citations
Perplexity Pro is $20/month, or ~$16.67/month on annual billing ($200/year). It’s a fundamentally different product from the other tools here: an answer engine built around real-time web access and inline source citations, not a model chat interface. Every response draws from current web sources with traceable references — you’re not working from training data with a knowledge cutoff.
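The annual numbers check out. A quick sketch of the effective-rate arithmetic (`annual_summary` is an illustrative helper, not any vendor's API):

```python
def annual_summary(monthly: float, annual_total: float) -> tuple:
    """Effective monthly rate and yearly savings when prepaying annually."""
    effective_monthly = round(annual_total / 12, 2)
    yearly_savings = monthly * 12 - annual_total
    return effective_monthly, yearly_savings


# Perplexity Pro: $20/mo month-to-month, $200 prepaid annually.
print(annual_summary(20.00, 200.00))  # (16.67, 40.0)
```

The same $200/year structure applies to Claude Pro, making these the only two tools in the comparison where commitment buys a discount.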
The multi-model selector is a genuine advantage nobody else in this comparison offers. Pro users choose between GPT-5.5, Claude Opus 4.x, Gemini 3.1 Pro, or Perplexity’s own Sonar models per query. I used it to combine Claude’s analytical depth with Perplexity’s real-time sourcing for a competitive intelligence task — asking Claude-based Sonar to synthesize market developments from the last three months. That’s a combination not available on any single-model subscription.
Deep Research for multi-step synthesis is the standout feature. I ran a five-part competitive landscape analysis: it pulled from multiple sources, identified where they conflicted, and structured its uncertainty appropriately in the output. Citation-first architecture means you can trace every claim to a source rather than trusting a model that may be confabulating confidently.
Here’s where I can’t recommend Perplexity Pro without qualification: the billing and support situation is bad enough to affect subscription decisions. Trustpilot rating is approximately 1.6 out of 5 from 180 reviews, dominated by a consistent pattern — surprise subscription charges, difficulty canceling, payment glitches causing account downgrades without warning, conversation history lost, and support that is chatbot-only with no human escalation path. “Absolutely abysmal customer support — a payment glitch caused my account to downgrade without warning and my conversation history was wiped clean” is a representative Trustpilot entry, not an outlier.
Thread management is separately broken for a research product. No folders, no tags, no search within your own history. After four weeks of daily research use, finding a thread from two weeks ago required scrolling an undifferentiated reverse-chronological list. For AI tools suited to academic and technical research depth, 6 AI Research Paper Tools Tested 2026: Elicit vs Consensus vs Scite (Ranked) covers the specialized end of the market.
Pricing tiers:
- Free: Limited Pro searches
- Pro: $20/mo or $16.67/mo annual ($200/yr)
- Max: $200/mo — individual power user tier
- Education Pro: $10/mo — verified students and faculty only
- Enterprise Pro: $40/user/mo; Enterprise Max: $325/user/mo ($3,250/yr)
Pros:
- Real-time web sourcing with inline citations as the core product — not bolted on
- Multi-model selector: GPT-5.5, Claude Opus 4.x, Gemini 3.1 Pro, Sonar on one subscription
- Deep Research for autonomous multi-step synthesis with structured outputs
- Annual billing saves $40/year — Claude Pro is the only other tool in this comparison offering an annual discount
- Education Pro at $10/month is the best-value AI subscription for verified students/faculty
Cons:
- Trustpilot ~1.6/5 (180 reviews) — billing failure and support quality issues are a documented pattern
- Thread management is non-functional: no search, folders, or tags for research archives
- Enterprise support is chatbot-only; no human escalation path documented
- Payment glitches have caused account downgrades and conversation history loss
- February 2026 subscription-first pivot is recent; long-term model sustainability unproven
Rating: 7.0/10
Microsoft 365 Copilot — Integration Without Equivalence
Best for: Large organizations where M365 is mandatory infrastructure and centralized IT deployment is required
Microsoft 365 Copilot is $18/user/month promotional through June 30, 2026 — rising to $21/user/month from July 1. Enterprise tiers are $30/user/month. On paper, it fits this comparison. In practice, it cannot be purchased standalone. It requires a qualifying M365 subscription: minimum M365 Business Basic at $6/user/month, more commonly M365 Business Standard at $12.50/user/month. Real all-in cost: $24–51+/user/month. That’s not a $20 tool.
The integration case is real. Copilot lives inside Word, Excel, PowerPoint, Teams, Outlook, and OneNote. If your organization’s entire knowledge work flows through these apps, having AI that drafts directly in your document rather than a browser tab has genuine friction-reduction value. Copilot Pages for collaborative AI documents is a legitimate team use case. Central admin controls, audit logs, and compliance features matter at enterprise scale.
The output quality problem is what makes it hard to recommend. User reviews consistently describe Copilot as “nowhere near as helpful as free versions of Claude and ChatGPT.” My testing was consistent: equivalent document summarization tasks produced noticeably more formulaic, hedged responses in Copilot compared to Claude Pro at the same prompt. Paying $18–30/user/month for AI that underperforms the free tier of a competitor requires a specific justification — “IT controls the toolchain and switching isn’t an option” — not “the output quality is worth it.”
Two 2026 events damaged the product’s standing materially. On April 15, unlicensed users lost all Copilot access in Word, Excel, PowerPoint, and OneNote — removing Copilot Chat functionality that had previously been available without charge, with inadequate communication before the change. A class action arbitration has been filed over alleged auto-enrollment, with one Microsoft Learn Q&A user summarizing it as “The copilot fiasco — unwanted, forced to pay, and terrible execution.”
For workflow automation that doesn’t require a mandatory platform subscription, 7 AI Business Automation Tools Tested 2026: Zapier vs Make vs Power Automate covers the alternatives.
Pricing tiers (plus required M365 base):
- M365 Business Basic: $6/user/mo (minimum qualifying subscription)
- M365 Copilot Business: $18/user/mo promotional (through June 30, 2026), then $21/user/mo
- M365 Copilot Enterprise: $30/user/mo annual
- All-in minimum: $24/user/mo; typical deployment: $30–51/user/mo
Pros:
- Native integration across Word, Excel, PowerPoint, Teams, Outlook — no app switching
- Copilot Pages for collaborative AI-generated document creation
- Centralized admin, audit logs, SAML SSO, compliance features for IT teams
- Makes operational sense when M365 is already mandatory infrastructure
- Copilot Chat included for basic use with eligible M365 subscriptions
Cons:
- Cannot be purchased standalone — real cost is $24–51+/user/month all-in
- Output quality widely reported as inferior to free-tier Claude and ChatGPT
- April 2026 removal of previously free Copilot Chat features — inadequate advance notice
- Class action arbitration over auto-enrollment allegations
- 60 AI credits/month on Personal/Family plans insufficient for sustained daily use
- Promotional pricing expires June 30, 2026 — price rises $3/user/mo on July 1
Rating: 5.4/10
Use Case Recommendations
Freelancers and solopreneurs: Claude Pro at $20/month ($17/month annual). The 1M context window handles long client briefs and documents without chunking, the annual discount is the only one at this price tier, and Claude Code is included for light development work. The best single subscription for most individual knowledge workers. See also Best AI Tools for Freelancers 2026: Top 5 Save 6+ Hours Per Week for tool-stack recommendations beyond just the AI assistant.
Developers who write code daily: GitHub Copilot Pro at $10/month as the base for IDE-native completions, with the caveat that June 2026 usage-based billing makes agentic and heavy chat use unpredictable in cost until per-token prices are published. For multi-file agentic reasoning beyond what completions and light chat deliver, Copilot vs Cursor vs Claude 2026: 5 AI Coding Assistants Tested & Ranked covers the performance gap.
Content creators and marketers: ChatGPT Plus at $20/month for the breadth of tools rather than raw model quality. Web search, DALL-E, and Deep Research in one subscription are useful for research-to-creation workflows. The context window limitation matters less for content creation than for document analysis. For writing-specific AI tool quality, Best AI Writing Tools 2026: 7 Tested, Ranked by Real Output Quality covers specialist tools that outperform general-purpose assistants on specific formats.
Researchers and analysts: Perplexity Pro’s real-time sourcing and multi-model selector make it the best pure research tool at this tier — most useful as a layer on top of a primary AI subscription rather than a replacement for it. The billing track record warrants using a card with spending alerts and monitoring charges actively. For academic contexts, the Education Pro tier at $10/month is compelling if you qualify.
Teams on Google Workspace: Google AI Pro at $19.99/month. The 1M context and Workspace integration reduce friction for document-heavy teams, and the bundled 5 TB storage adds legitimate value for organizations paying for Google One storage separately. Know the ~100 daily prompt cap before deploying to heavy users.
Enterprise M365 organizations: M365 Copilot has the only native app integration story for Microsoft’s ecosystem. The value case requires honest accounting of total cost (base M365 + Copilot), realistic expectations about output quality vs. standalone AI tools, and organizational context where IT manages the toolchain. Don’t buy it because the output quality is compelling — buy it only when the integration architecture is the decision criterion.
Pricing Deep Dive: All Tiers
Claude (Anthropic) — May 2026
| Plan | Price | Annual | Context | Key Features |
|---|---|---|---|---|
| Free | $0 | — | Limited | Rate-limited Sonnet 4.6 access |
| Pro | $20/mo | $17/mo ($200/yr) | 1M tokens | Opus 4.7, Claude Code web app, context compaction beta |
| Max 5x | $100/mo | None (monthly only) | 1M tokens | 5x usage vs Pro |
| Max 20x | $200/mo | None (monthly only) | 1M tokens | 20x usage vs Pro |
| Team | $25/user/mo | Annual required | 1M tokens | Collaboration, admin; Claude Code seat: +$150/mo |
| Enterprise | Custom | Custom | 1M tokens | Custom models, SSO, Claude Security (code scanning) |
ChatGPT (OpenAI) — May 2026
| Plan | Price | Annual | Context | Key Features |
|---|---|---|---|---|
| Free | $0 | — | 32K | GPT-5.5 mini, ad-supported |
| Go | $8/mo | — | 32K | Basic expanded access, global tier |
| Plus | $20/mo | None | 32K / 256K Thinking | GPT-5.5 Instant default, Codex, Deep Research, DALL-E |
| Pro $100 | $100/mo | None | 256K+ | 5x usage vs Plus, launched April 9, 2026 |
| Pro $200 | $200/mo | None | 256K+ | 20x usage vs Plus |
| Team | $25/user/mo | Annual required | — | Business collaboration |
GitHub Copilot — Pre/Post June 2026
| Plan | Price | Completions | Chat/Agentic | Key Notes |
|---|---|---|---|---|
| Free | $0 | 2,000/mo | 50 premium requests | Basic completions only |
| Pro | $10/mo | Unlimited | $10 AI Credits (post-June: token-metered) | All paid IDEs |
| Pro+ | $39/mo | Unlimited | $39 AI Credits, all models | Maximum model access |
| Business | $19/user/mo | Unlimited | $19 AI Credits | Centralized billing, IP indemnification, SAML SSO |
Google AI — May 2026
| Plan | Price | Context | Storage | Key Features |
|---|---|---|---|---|
| AI Plus | $7.99/mo | — | 2 TB | Basic Gemini access |
| AI Pro | $19.99/mo | 1M tokens | 5 TB | Gemini 3.1 Pro, Code Assist, Veo 3.1, 1,000 monthly AI credits |
| AI Ultra | $249.99/mo | 1M+ tokens | 5 TB | Deep Think, 25K credits, $100/mo Google Cloud, YouTube Premium |
Perplexity — May 2026
| Plan | Price | Annual | Key Features |
|---|---|---|---|
| Free | $0 | — | Limited Pro searches, citations |
| Pro | $20/mo | $16.67/mo ($200/yr) | Unlimited searches, multi-model selector, Deep Research |
| Max | $200/mo | — | Power user individual tier |
| Education Pro | $10/mo | — | Verified students/faculty only |
| Enterprise Pro | $40/user/mo | — | Business tier |
| Enterprise Max | $325/user/mo | $3,250/yr | Maximum enterprise |
The Transparency Problem Nobody Is Solving
One observation that deserves its own space: every tool in this comparison has a documented user complaint about pricing opacity, and the problem is getting worse rather than better in mid-2026.
GitHub Copilot announced a billing model transition without publishing per-token prices. Claude Pro has undocumented dynamic usage limits. ChatGPT silently downgrades to mini without a proactive alert. Google AI Pro has an underdocumented daily cap. Perplexity has billing failures documented across hundreds of reviews. Microsoft Copilot has opaque credit consumption with no countdown on remaining limits.
This is an industry-wide problem: AI inference costs are variable and hard to communicate cleanly to consumers. None of these companies has fully solved how to expose usage economics honestly without either scaring users away or hiding the limits until users hit them. Until they do: set spending alerts on any new AI subscription payment method, monitor actual monthly charges, and be especially cautious with annual commitments before you’ve validated a tool’s behavior at the limits.
Final Verdict
Overall winner: Claude Pro ($20/month)
Claude Opus 4.7 with 1M token context, the only annual billing discount in the $20 tier ($17/month), Claude Code included in the web app, and consistently strong output quality across document analysis and coding tasks. The April 2026 undisclosed A/B test is a data point about how Anthropic communicates changes — one worth tracking. The tool itself remains the best-value general-purpose AI subscription at this price tier as of May 2026. For the head-to-head quality verdict across specific task types, ChatGPT Plus vs Claude Pro 2026: Which $20/Month AI Subscription Is Actually Worth It? covers the twelve-task comparison that ended with Claude winning eight.
Runner-up: ChatGPT Plus ($20/month)
Better feature breadth than Claude Pro — web search, DALL-E, Deep Research, Codex without add-ons. The 32K standard context is the structural limitation that drops it to second place when document analysis or large codebase work enters the picture. If your workflow is conversational or content-creation-focused rather than document-heavy, the gap vs. Claude Pro narrows substantially.
Best developer value: GitHub Copilot Pro ($10/month)
Half price, IDE-native, and inline completions that remain unlimited even after the June billing transition. The June 2026 uncertainty is specifically about agentic features and heavy multi-model chat — not about the completions that drive daily flow-state coding. At $10/month, it’s the default starting point for any developer not already committed to a competitor. For broader value coverage across tools under $20, 8 AI Tools Under $20/Month Tested in 2026: Best Value Subscriptions Ranked covers additional categories. For a productivity ROI lens rather than feature comparison, 7 AI Productivity Tools Tested in 2026: Ranked by Hours Saved per Week gives the hours-per-week analysis.
Frequently Asked Questions
Which AI subscription gives the best value at $20/month in 2026?
Claude Pro edges ChatGPT Plus for most working professionals as of May 2026. The 1M token context window on Claude Opus 4.7 is the key differentiator at this price — it handles large documents and codebases where ChatGPT Plus’s 32K standard context hits a ceiling. Claude also offers annual billing at $17/month, the only discount available at this tier. If web search and image generation are core requirements, ChatGPT Plus’s broader integration suite closes the gap.
Why does ChatGPT Plus only offer 32K context when Claude Pro offers 1M at the same price?
ChatGPT Plus defaults to GPT-5.5 Instant at 32K tokens. A 256K Thinking mode exists, but it requires manual per-conversation activation, counts against a separate weekly cap of 3,000 uses, and gives no proactive warning as you approach that limit. OpenAI has not offered a 1M-context model to Plus subscribers as of May 2026. For document-heavy or codebase-heavy workflows, this gap is operationally significant, not just a spec-sheet difference.
What is the actual total cost of Microsoft 365 Copilot?
The advertised $18/user/month is promotional through June 30, 2026 — but it’s not the real entry price. M365 Copilot cannot be purchased standalone; it requires a qualifying M365 subscription. M365 Business Basic at $6/user/month is the minimum, putting all-in cost at $24/user/month. Most business deployments run on M365 Business Standard at $12.50/user/month or higher, bringing true spend to $30.50–51+/user/month before any enterprise tier. The promotional rate becomes $21/user/month from July 1, 2026.
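The all-in math is easy to get wrong when comparing against flat $20 subscriptions, so here is a minimal sketch of it in Python. The license names and prices are the ones cited in this article as of May 2026; other qualifying base tiers scale the same way, up toward the $51+ figure with higher-end enterprise licenses.

```python
# All-in Microsoft 365 Copilot cost per user/month.
# Prices are the figures cited in this article (assumed accurate as of May 2026).
COPILOT_PROMO = 18.00     # promotional rate through June 30, 2026
COPILOT_STANDARD = 21.00  # rate from July 1, 2026

BASE_LICENSES = {
    "M365 Business Basic": 6.00,       # minimum qualifying base license
    "M365 Business Standard": 12.50,   # typical business deployment
}

def all_in(base_price: float, copilot_price: float) -> float:
    """Copilot can't be bought standalone: total = base license + Copilot add-on."""
    return round(base_price + copilot_price, 2)

for name, base in BASE_LICENSES.items():
    print(f"{name}: ${all_in(base, COPILOT_PROMO):.2f} now, "
          f"${all_in(base, COPILOT_STANDARD):.2f} after the promo ends")
```

On Business Basic the all-in cost is $24.00/user/month today, rising to $27.00 in July 2026; on Business Standard it is $30.50 today and $33.50 after the promo ends.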
Is GitHub Copilot’s June 2026 billing change a reason to avoid subscribing?
Not for inline code completions — those remain unlimited on all paid plans regardless of the June 1 billing transition. The risk concentrates in premium model chat and Copilot Workspace agentic sessions, which shift to per-token metered billing. GitHub has not published per-token prices for each AI model as of late April 2026, making actual costs for heavy agentic use impossible to calculate before June 1. For completions-focused developers with light chat usage, the risk is low. For heavy Copilot Workspace users, the exposure is genuine and unquantified.
What makes Perplexity Pro different from ChatGPT Plus with web search?
Perplexity’s architecture is built around real-time web access as the primary product, not a bolt-on feature. Every response cites inline sources by default. The multi-model selector lets Pro users choose between GPT-5.5, Claude Opus 4.x, Gemini 3.1 Pro, and Sonar — no other $20 subscription offers this flexibility. The significant caveat is a Trustpilot rating of approximately 1.6/5 from 180 reviews documenting billing failures, difficulty canceling, and chatbot-only support. ChatGPT Plus web search is more reliable for users who can’t absorb billing support risk.
Is Claude Code included in the standard $20/month Claude Pro plan?
Yes, as of May 2026. Claude Code via the web app is included in Claude Pro. Anthropic ran a brief undisclosed A/B test on April 21–22 that temporarily removed Claude Code from the Pro plan pricing page — reversed the next morning after user backlash, confirmed as an approximately 2% test. For business users wanting Claude Code at scale, the Team plan requires a $150/month premium seat — a steep step up from $20 Pro. Enterprise pricing is custom.
Should developers pay for GitHub Copilot and Claude Pro simultaneously?
For developers whose daily work involves both writing code and analyzing documents or complex multi-step reasoning, the dual-tool stack costs $30/month and outperforms any single $20 subscription for that use case. Copilot Pro at $10/month handles IDE-native inline completions; Claude Pro at $20/month handles long-context analysis and agentic tasks requiring broader reasoning. The emerging consensus in developer communities is roughly $30/month as the productive dual-tool spend — Copilot for always-on completions, Claude Code or Claude Pro for deeper sessions.