The collapse of Anthropic's $200 million Department of Defense contract in March 2026—sparked by irreconcilable disputes over AI safety safeguards, including refusals to enable unrestricted military applications like autonomous weapons—has left trader sentiment evenly split on prospects for a renewed Pentagon deal. The DoD subsequently labeled Anthropic a supply chain risk, barring it from procurements, and pivoted to competitors: OpenAI inked an agreement in late February, while Google expanded access to its AI models just this week amid employee objections. Anthropic's vow to sue and ongoing provision of Claude models at nominal cost to national security users fuel hopes for reconciliation, but regulatory blacklisting and DoD diversification to mitigate vendor overreliance pose steep barriers. Watch for lawsuit filings or congressional hearings on AI governance as key catalysts.
Experimental AI-generated summary referencing Polymarket data. This is not trading advice and plays no role in how this market resolves. · Updated · $138,204 Vol.
April 30: 2%
May 31: 46%
June 30: 69%
This market will resolve to “Yes” if Anthropic and the United States Department of Defense (DOD/Department of War) reach any commercial agreement to allow for the use of Claude or other Anthropic artificial intelligence models by DOD employees by May 31, 2026, 11:59 PM ET. Otherwise, this market will resolve to “No”.
A commercial agreement between Anthropic and a broader set of the US government that grants usage of Anthropic AI models to DOD employees will count. However, agreements or designations that merely allow Anthropic to offer its services to the DOD, without constituting an effective agreement to do so, will not count (e.g., the inclusion of Anthropic on a Master Service Agreement or an Indefinite Delivery Indefinite Quantity contract would not count).
An official announcement of a qualifying agreement, made within this market’s timeframe, will count, regardless of whether or when the agreement actually goes into effect.
Official announcements that the previously agreed contract between Anthropic and the DOD will be fully or partially reinstated, or otherwise will continue without impediment, will count, so long as this includes extended use of Anthropic AI models by DOD employees beyond any designated phase-out period.
Continued use of Anthropic technologies by DOD employees without a qualifying agreement (e.g. during a 6 month phase-out period) will not count. A court ruling that the designation of Anthropic as a supply chain risk is unlawful will not qualify for a “Yes” resolution unless it is accompanied by a reinstatement of Anthropic's DOD contract or a new qualifying Anthropic-DOD agreement.
The primary resolution sources for this market will be official information from Anthropic and the United States federal government; however, a consensus of credible reporting will also be used.
Market Opened: Apr 27, 2026, 11:41 AM ET
Resolver
0x65070BE91...