Trader consensus assigns a 100% implied probability to Anthropic holding the best math AI model at April's end, driven by Claude Opus 4.7's top ranking in the Text Arena Math category of the Chatbot Arena LLM Leaderboard following its April 16 release. This crowd-sourced evaluation, which measures head-to-head math reasoning with style control off, shows Claude outperforming OpenAI's GPT-5.4 and Google's Gemini 3.1 Pro, bolstered by verified 96.25% AIME scores and strong MATH benchmark results, with no rival announcements in the past week. While resolution at noon ET on April 30 locks in the snapshot, ranked by Arena score with an alphabetical tiebreaker, scenarios such as a last-minute surge in user voting or undisclosed model tweaks could still challenge it, though traders dismiss those risks given the decisive lead.
Experimental AI-generated summary using Polymarket data. This is not trading advice and does not influence how this market resolves. · Updated
Which company has the best math AI model at the end of April?
$697,618 Vol.

Anthropic: 100%
OpenAI: <1%
xAI: <1%
DeepSeek: <1%
Amazon: <1%
Z.ai: <1%
Xiaomi: <1%
ByteDance: <1%
Meituan: <1%
<1%
Alibaba: <1%
Baidu: <1%
Moonshot: <1%
Mistral: <1%
Results from the "Rank" column under the "Text Arena | Math" Leaderboard tab at https://arena.ai/leaderboard/text/math-no-style-control with style control off will be used to resolve this market.
Models will be ordered primarily by their leaderboard rank at the market’s check time. If two or more models are tied on rank, they will be ordered by their Arena score, including any underlying, unrounded, granular values reflected in the data below the leaderboard. If a tie remains, alphabetical order of company names as listed in this market group will be used as a final tiebreaker (e.g., if the two models are tied by exact arena score, “Google” would be ranked ahead of “xAI”). This market will resolve based on the company that occupies first place under this ranking system.
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and will resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
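The ordering described above (leaderboard rank first, then unrounded Arena score, then alphabetical company name as the final tiebreaker) can be sketched as a single sort key. The entries below are hypothetical, chosen only to mirror the Google/xAI tie example in the rules; they are not real leaderboard data:

```python
# Sketch of the market's stated ranking procedure on hypothetical entries.
# Sort key: rank ascending, then Arena score descending, then name A-Z.
models = [
    {"company": "xAI",       "rank": 2, "arena_score": 1421.3},
    {"company": "Anthropic", "rank": 1, "arena_score": 1432.7},
    {"company": "Google",    "rank": 2, "arena_score": 1421.3},
]

ranked = sorted(models, key=lambda m: (m["rank"], -m["arena_score"], m["company"]))
winner = ranked[0]["company"]  # first place under this ranking resolves the market
print([m["company"] for m in ranked])  # ['Anthropic', 'Google', 'xAI']
```

Note how Google and xAI are tied on both rank and exact Arena score, so Google is placed ahead purely by the alphabetical tiebreaker, matching the example given in the rules.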
Market opened: Apr 2, 2026, 5:48 PM ET
Resolver
0x69c47De9D...