Potential US Export Controls on AI Could Cause Problems, Think Tank Says
The U.S. government could face a host of challenges if it tries to place export controls on AI models to protect national security, the Center for European Policy Analysis (CEPA) said in an article last week.
Concerns likely would be raised about impeding international collaboration, ceding U.S. leadership, and violating First Amendment protections for computer source code, the article says. The feasibility of enforcing such controls could also come into question.
“The U.S. and Europe are finding it difficult enough to halt the flow of physical products -- a plethora of export controls are not stopping Russia from using Western chips to aim its missiles and drones at Ukrainian civilians,” CEPA wrote. “How can the U.S. restrict the movement of software bits and bytes, particularly if they are open source?”
Although the U.S. government hasn't yet imposed AI controls, CEPA said proposed reporting rules could lead to restrictions. The reporting rules, which the Bureau of Industry and Security unveiled last month, would require developers of advanced AI models and computing clusters to submit information about their activities to the agency (see 2409090012).
"While not export controls, the proposed reporting rules would give the US government data needed to determine which AI models to restrict," CEPA said.
If the government does indeed pursue export controls, it would have to determine whether they would apply to open or closed models and whether they would be based on models’ compute power or capabilities, CEPA said.
The article was written by Matthew Eitel, special assistant to CEPA’s president and CEO.