Anthropic's Claude Opus 4.7, released on April 16, has driven near-unanimous trader consensus (100% implied probability) that it will hold the top spot on the LM Arena Coding leaderboard through April 30. The conviction rests on large benchmark gains, including 87.6% on SWE-bench Verified (up nearly 7 points from Opus 4.6) and a +46 Elo lead in Code Arena over the nearest non-Anthropic model, GLM-5.1. This agentic coding strength, emphasizing real-world software engineering tasks, has solidified its edge amid tight races elsewhere. With resolution due at noon ET today, only a surprise last-minute release from OpenAI (GPT-5.5), Google (Gemini 3.1 Pro), or a Chinese contender such as DeepSeek could change the outcome, and the timeline makes that improbable.
Experimental AI-generated summary referencing Polymarket data. This is not trading advice and plays no role in how this market resolves. · Updated
Anthropic 100.0%
xAI <1%
ByteDance <1%
Baidu <1%
$264,317 Vol.

Anthropic
100%

xAI
<1%

ByteDance
<1%

Baidu
<1%

OpenAI
<1%

Alibaba
<1%

Moonshot
<1%

DeepSeek
<1%

Mistral
<1%

Meituan
<1%

<1%

Amazon
<1%

Z.ai
<1%
Results from the "Rank" column under the "Text Arena | Coding" Leaderboard tab at https://arena.ai/leaderboard/text/coding-no-style-control with style control off will be used to resolve this market.
Models will be ordered primarily by their leaderboard rank at the market’s check time. If two or more models are tied on rank, they will be ordered by their Arena score, including any underlying, unrounded, granular values reflected in the data below the leaderboard. If a tie remains, alphabetical order of company names as listed in this market group will be used as a final tiebreaker (e.g., if the two models are tied by exact arena score, “Google” would be ranked ahead of “xAI”). This market will resolve based on the company that occupies first place under this ranking system.
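The ordering rules above amount to a three-key sort: leaderboard rank first, then exact (unrounded) Arena score, then company name alphabetically. A minimal sketch of that logic, assuming each leaderboard entry can be represented as a dict with `company`, `rank`, and `score` fields (all names and numbers below are illustrative, not real leaderboard data):

```python
# Hypothetical sketch of the market's stated tiebreaker ordering.
# Sort keys, in priority order:
#   1. leaderboard rank (lower is better)
#   2. exact Arena score (higher is better, hence negated)
#   3. company name, alphabetically

def resolve_order(entries):
    """Return entries ordered per the market's tiebreaker rules.

    Each entry is a dict with 'company' (str), 'rank' (int, lower is
    better), and 'score' (float Arena score, higher is better).
    """
    return sorted(
        entries,
        key=lambda e: (e["rank"], -e["score"], e["company"]),
    )

if __name__ == "__main__":
    # Illustrative data: three models tied on rank, two tied on exact score.
    leaderboard = [
        {"company": "xAI", "rank": 1, "score": 1402.7},
        {"company": "Google", "rank": 1, "score": 1402.7},  # exact score tie
        {"company": "Anthropic", "rank": 1, "score": 1410.3},
    ]
    winner = resolve_order(leaderboard)[0]["company"]
    print(winner)  # prints "Anthropic"
```

With these sample values, Anthropic leads on score despite the rank tie, and Google places ahead of xAI alphabetically, matching the "Google before xAI" example in the rules.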
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and will resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
Market Opened: Apr 2, 2026, 5:39 PM ET
Resolver
0x69c47De9D...


