Anthropic's Claude Opus 4.7, released April 16, has driven near-unanimous trader consensus at 99% implied probability by dominating the LM Arena Code leaderboard—the market's resolution source—with a commanding lead in agentic coding on real-world tasks like building live websites and apps. The large language model delivered double-digit gains on benchmarks such as SWE-bench Verified (87.6%) and CursorBench (70%), excelling in multi-step planning, tool calls, and self-verification, and outpacing OpenAI's GPT-5.4 and Google's Gemini 3.1 Pro by 37-46 Elo points. No rival updates have closed the gap in the past two weeks. The main residual risk is a surprise pre-deadline release from a competitor such as OpenAI that rapidly accrues enough user votes to overtake it on the leaderboard.
Experimental AI-generated summary using Polymarket data. This is not trading advice and does not influence how this market resolves. · Updated
Anthropic 99.3%
OpenAI <1%
DeepSeek <1%
Moonshot <1%
$256,546 Vol.

Anthropic
99%

OpenAI
1%

DeepSeek
1%

Moonshot
<1%

xAI
<1%

ByteDance
<1%

Baidu
<1%

Alibaba
<1%

Mistral
<1%

Meituan
<1%

<1%

Amazon
<1%

Z.ai
<1%
Results from the "Rank" column under the "Text Arena | Coding" Leaderboard tab at https://arena.ai/leaderboard/text/coding-no-style-control with style control off will be used to resolve this market.
Models will be ordered primarily by their leaderboard rank at the market’s check time. If two or more models are tied on rank, they will be ordered by their Arena score, including any underlying, unrounded, granular values reflected in the data below the leaderboard. If a tie remains, alphabetical order of company names as listed in this market group will be used as a final tiebreaker (e.g., if the two models are tied by exact arena score, “Google” would be ranked ahead of “xAI”). This market will resolve based on the company that occupies first place under this ranking system.
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and will resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
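The rank-then-score-then-alphabetical ordering described above can be sketched as a single sort key. The model entries below are hypothetical illustration only, not actual leaderboard data:

```python
# Sketch of the market's stated ranking procedure:
# 1) ascending leaderboard rank, 2) descending Arena score (unrounded),
# 3) ascending alphabetical company name as a final tiebreaker.
# All entries below are made-up example values, not real leaderboard data.
models = [
    {"company": "xAI", "rank": 2, "score": 1421.37},
    {"company": "Google", "rank": 2, "score": 1421.37},  # exact tie with xAI
    {"company": "Anthropic", "rank": 1, "score": 1455.02},
]

# Negating the score turns the ascending sort into "higher score wins".
ordered = sorted(models, key=lambda m: (m["rank"], -m["score"], m["company"]))

# The market resolves to the company occupying first place under this order.
winner = ordered[0]["company"]
```

With these example values, the two rank-2 models tie on exact score, so "Google" is ordered ahead of "xAI" alphabetically, matching the example given in the rules.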
Market opened: Apr 2, 2026, 5:39 PM ET
Resolver
0x69c47De9D...
Frequently asked questions