House Members Look to Define AI Framework
Congress needs to identify an AI regulatory framework so companies like Facebook can be held accountable for biases and side effects associated with algorithms, said House AI Task Force Chairman Bill Foster, D-Ill., during a hearing Wednesday.
It’s important for Congress to promote innovation while ensuring AI is transparent and ethical, said ranking member Anthony Gonzalez, R-Ohio. These technologies aren’t perfect because they’re created by humans, and Congress needs to be careful not to overregulate, he added: A higher regulatory burden means further entrenchment for incumbents and less innovation from startups.
“Part of me” believes Congress needs to determine the cause and effect of algorithmic inputs and results, said Rep. Jake Auchincloss, D-Mass. He discussed testimony from Facebook whistleblower Frances Haugen, who opposed revisions of Communications Decency Act Section 230 for user-generated content (see 2110050062). Haugen said companies should be held liable for the algorithms themselves and their real-world impacts, Auchincloss said, and he agreed companies need to be liable for the algorithms. Facebook didn’t comment.
Senate Commerce Committee Chair Maria Cantwell, D-Wash., urged Facebook Tuesday to preserve all documents and data about the internal research discussed at the hearing. The committee is exploring subpoena options, she told us last week (see 2110060077). “Haugen’s testimony that Facebook’s algorithms incentivize angry and divisive content on its platforms is chilling,” she said in a statement Tuesday.
There should be a balance of regulation and internal company accountability, said BSA|The Software Alliance Vice President-Global Policy Aaron Cooper. Senior-level officials should sign off on algorithmic decisions, and there should be documentation of why certain decisions are made, he said. If Congress can’t determine causality from the algorithms, companies can distance themselves from accountability, said Auchincloss.
Facebook’s AI objective is to maximize profit, said Foster. Because rational political debate interferes with that profit, this approach has been killing off “all rational political debate,” he added. He supported restraints for AI that are stricter than those for human decision makers. Gonzalez argued for more innovation, which he said can fix technological issues. What’s illegal in the real world should be illegal in the digital world, said Cooper, citing discriminatory practices.
Part of the discussion focused on transparency about the outputs of AI and “opening the hood” to examine inputs. It’s not one or the other, said EqualAI CEO Miriam Vogel: Elements “under the hood” help explain the sensitive functions of AI. It’s an ever-moving target, said Wilson Center Science and Technology Innovation Program Director Meg King: Is an appropriate amount of data being collected, particularly from children?
AI input can exacerbate racial bias, said Rep. Ayanna Pressley, D-Mass. New York University associate professor of journalism Meredith Broussard agreed, saying Silicon Valley decision makers are mostly “pale, male and Yale.” They embed biases in the technology, so the AI has collective blind spots, said Broussard.
Algorithms shouldn’t be black boxes, said Rep. Barry Loudermilk, R-Ga.: There should be a record of every input. Without record-keeping, there's no accountability, said Bank for International Settlements Financial Stability Institute Principal Adviser Jeffrey Yong: If the AI model isn’t transparent, it’s hard to determine whether it’s sound or biased. Vogel agreed AI should be designed for a broader swath of society so everyone can benefit.