Recent high-level talks between Anthropic CEO Dario Amodei and Trump administration officials, including chief of staff Susie Wiles, alongside President Trump's April statement deeming a Pentagon deal "possible," have thawed relations after months of conflict. The dispute erupted in February when the Department of Defense demanded Anthropic lift safeguards on its Claude frontier models prohibiting autonomous weapons and mass surveillance, leading to contract termination, a "supply chain risk" designation, and Anthropic's lawsuit alleging First Amendment violations—a label upheld by courts pending appeal. With rivals OpenAI and Google securing DoD contracts, traders weigh Anthropic's AI safety commitments against national security needs; key catalysts include lawsuit rulings and potential executive orders before June's resolution deadline.
Experimental AI-generated summary based on Polymarket data. This is not trading advice and does not influence how this market resolves. · Updated
$133,559 Vol.
April 30: 2%
May 31: 37%
June 30: 63%
This market will resolve to “Yes” if Anthropic and the United States Department of Defense (DOD/Department of War) reach any commercial agreement to allow for the use of Claude or other Anthropic artificial intelligence models by DOD employees by April 30, 2026, 11:59 PM ET. Otherwise, this market will resolve to “No”.
A commercial agreement between Anthropic and a broader set of the US government that grants DOD employees usage of Anthropic AI models will count. However, agreements or designations that merely allow Anthropic to offer its services to the DOD, without constituting an effective agreement for Anthropic to do so, will not count (e.g., the inclusion of Anthropic on a Master Service Agreement or an Indefinite Delivery/Indefinite Quantity contract would not count).
An official announcement of a qualifying agreement, made within this market’s timeframe, will count, regardless of whether or when the agreement actually goes into effect.
Official announcements that the previously agreed contract between Anthropic and the DOD will be fully or partially reinstated, or otherwise will continue without impediment, will count, so long as this includes extended use of Anthropic AI models by DOD employees beyond any designated phase-out period.
Continued use of Anthropic technologies by DOD employees without a qualifying agreement (e.g., during a 6-month phase-out period) will not count. A court ruling that the designation of Anthropic as a supply chain risk is unlawful will not qualify for a “Yes” resolution unless it is accompanied by a reinstatement of Anthropic's DOD contract or a new qualifying Anthropic-DOD agreement.
The primary resolution sources for this market will be official information from Anthropic and the United States federal government; however, a consensus of credible reporting will also be used.
Market opened: Mar 6, 2026, 1:33 PM ET
Resolver
0x65070BE91...