Americans for Responsible Innovation, a nonprofit focused on AI policy, is pushing the Trump administration to mandate security reviews for any AI lab bidding on U.S. government contracts. The recommendation, made on May 11, 2026, targets developers who build so-called “frontier” models, the most powerful and potentially dangerous AI systems in existence.
ARI’s proposal centers on structured security assessments that would serve as a screening mechanism for federal procurement. An AI lab building frontier models would need to demonstrate that its systems have been vetted for potential misuse before qualifying for a government contract.
On April 6, 2026, the group warned the General Services Administration of the risks inherent in vague “any lawful use” clauses found in existing AI procurement regulations. According to ARI, these clauses essentially give AI systems a blank check to operate without meaningful guardrails once they are inside government systems.
ARI highlights a 4.2x annual growth rate in AI computing since 2010, a trajectory that suggests the capabilities of frontier models are growing far faster than the government’s ability to evaluate them. Meanwhile, 82% of the public does not trust technology leaders to regulate AI themselves.
On March 24, 2026, the CFTC announced the creation of a task force specifically aimed at regulating the role of AI in digital assets. ARI, a relative newcomer to the AI regulation scene, has not drawn explicit connections to cryptocurrency or digital assets in its recommendations.
If mandatory security reviews become a reality, the most immediate impact will fall on AI companies that earn significant revenue from government contracts. Compliance costs would increase, and timelines for winning contracts would stretch. Smaller AI startups, which often lack the resources to conduct thorough security audits, could find themselves shut out of the federal market altogether.
The CFTC’s Task Force on AI and Digital Assets suggests that regulators are already thinking about how AI governance intersects with blockchain-based systems. Investors in tokens and AI-related projects should pay attention to how this proposal is received by the administration and Congress: if security mandates gain political momentum, the market will have to price in higher compliance costs across the AI sector, including for companies building at the intersection of AI and crypto.