Anthropic's Claude Opus 4.7 release on April 16 propelled it to the top of the LMSYS Chatbot Arena leaderboard—the gold standard for large language model rankings—with a leading Elo score around 1505, ahead of rivals such as OpenAI's GPT-5.5 and Google's Gemini 3.1 Pro in crowd-sourced blind tests spanning coding, reasoning, and general capabilities. This dominance, validated by real-user battles and third-party benchmarks, has pushed trader consensus to a 100% implied probability that Anthropic holds the top spot at April's end, reflecting skin-in-the-game bets on its strength in sustained tasks and agentic workflows. Though near-certain, a surprise late release or a leaderboard recalculation favoring a competitor could theoretically upset the outcome before resolution.
Experimental AI-generated summary based on Polymarket data. This is not trading advice and does not influence how this market resolves. · Updated

$21,423,420 Vol.

Anthropic 100%
OpenAI <1%
xAI <1%
Baidu <1%
Amazon <1%
Mistral <1%
Meituan <1%
Meta <1%
<1%
Alibaba <1%
ByteDance <1%
Moonshot <1%
Z.ai <1%
DeepSeek <1%
Microsoft <1%
Results from the "Rank" column under the "Text Arena | Overall" Leaderboard tab at https://lmarena.ai/leaderboard/text with style control off will be used to resolve this market.
Models will be ordered primarily by their leaderboard rank at the market’s check time. If two or more models are tied on rank, they will be ordered by their Arena score, including any underlying, unrounded, granular values reflected in the data below the leaderboard. If a tie remains, alphabetical order of company names as listed in this market group will be used as a final tiebreaker (e.g., if the two models are tied by exact arena score, “Google” would be ranked ahead of “xAI”). This market will resolve based on the company that occupies first place under this ranking system.
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and will resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
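The ordering rules above amount to a three-level sort: leaderboard rank first, then the unrounded Arena score, then alphabetical company name as a final tiebreaker. A minimal sketch of that comparison logic, assuming each leaderboard row exposes a rank, an unrounded score, and a company name (the `Entry` type and field names here are hypothetical, not part of the lmarena.ai data format):

```python
from dataclasses import dataclass

@dataclass
class Entry:
    company: str        # company name as listed in this market group
    rank: int           # "Rank" column value; lower is better
    arena_score: float  # underlying, unrounded Arena score; higher is better

def winner(entries: list[Entry]) -> str:
    """Return the company in first place under the market's tiebreak rules:
    rank ascending, then Arena score descending, then company name A-Z."""
    ordered = sorted(entries, key=lambda e: (e.rank, -e.arena_score, e.company))
    return ordered[0].company

# Example from the rules: two models tied on rank and exact Arena score
# resolve alphabetically, so "Google" ranks ahead of "xAI".
tied = [Entry("xAI", 1, 1501.3), Entry("Google", 1, 1501.3)]
print(winner(tied))  # -> Google
```

Sorting on a single composite key keeps all three tiebreak levels in one place, mirroring the order in which the rules text applies them.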
Market opened: Mar 20, 2026, 4:17 PM ET
Resolver
0x69c47De9D... Proposed outcome: Yes
Dispute window
Final