The Commerce Department granted Samsung up to $6.4 billion in federal funding to increase chip manufacturing in central Texas, the department announced Monday. The two sides signed a “non-binding preliminary memorandum of terms” for direct funding under the Chips and Science Act (see 2208090062). Samsung expects to invest more than $40 billion and create more than 20,000 jobs in the region related to semiconductor production. The investment will “cement central Texas’s role as a state-of-the-art semiconductor ecosystem,” President Joe Biden said in a statement. Samsung will manufacture “important components to our most advanced technologies, from artificial intelligence to high-performance computing and 5G communications,” Commerce Secretary Gina Raimondo said. Samsung plans to build a “comprehensive advanced manufacturing ecosystem, ranging from leading-edge logic to advanced packaging to R&D” in Taylor, Texas, near Austin. The company plans to expand facilities in Austin to “support the production of leading fully depleted silicon-on-insulator process technologies for critical U.S. industries, including aerospace, defense, and automotive.” Strengthening local semiconductor production will position the U.S. as “a global semiconductor manufacturing destination,” said Kye Hyun Kyung, CEO of Samsung’s Device Solutions Division.
U.S. and European officials should continue sharing enforcement strategies to keep pace with the tech sector and AI-related matters, U.S. and EU competition enforcers said Wednesday. DOJ Antitrust Division Chief Jonathan Kanter, FTC Chair Lina Khan and European Commission Executive Vice President Margrethe Vestager met in Washington, D.C., for the fourth session of the U.S.-EU Joint Technology Competition Policy Dialogue. Khan said in a statement: “As businesses move at breakneck speed to build and monetize AI and algorithmic decision-making tools, engaging with our international partners and sharing best practices will be especially critical.” Kanter said the growth of “data monopolies” and AI’s “rapid expansion” increase competitive threats from “digital gatekeepers.” Vestager said the tech sector raises “global challenges” related to AI and cloud computing: “It is essential to anticipate and address such challenges through close cooperation, leveraging our respective experiences for the benefit of consumers and businesses on both sides of the Atlantic.”
The FTC on Friday denied the videogame industry’s request to use face-scanning technology to determine user ages under child privacy rules, the agency announced. The agency denied the request without prejudice to allow the National Institute of Standards and Technology to examine the proposed age-verification method. The Entertainment Software Rating Board, Yoti and SuperAwesome filed an application in June seeking FTC approval for the age-estimation technology that uses facial geometry (see 2401300018). The group on March 22 requested a stay on the decision to allow NIST time to analyze results. The commission voted 4-0 to deny the request without prejudice, meaning ESRB can refile after NIST completes its work. Newly seated Commissioner Melissa Holyoak (see 2403080038) participated in the unanimous vote. Andrew Ferguson, also recently confirmed, hasn’t taken office yet. ESRB said in a statement Monday it’s “disappointed” the FTC declined to issue a “substantive decision,” further delaying an application the agency had already postponed twice. “We remain hopeful that facial age estimation and other innovative technologies will be considered COPPA-compliant when used to obtain verifiable parental consent in the near future,” ESRB said, referring to the Children's Online Privacy Protection Act, the statutory basis for the application.
Federal agencies have 60 days to designate a chief AI officer who will be responsible for ensuring the government is minimizing AI-related risks to civil rights and safety, Vice President Kamala Harris said Thursday. Harris announced OMB policies designed to protect against risks like bias and discrimination. The White House sent a memorandum to all executive departments and agencies, as directed under President Joe Biden’s AI executive order. It applies to civilian and military agencies, but there are exceptions and waivers for national security and law enforcement. The memo lays out safeguards, including impact assessments, that agencies must adopt when using AI. OMB said AI systems are “rights-impacting” if there’s a “legal, material, binding or similarly significant effect” on rights. In addition, agencies must apply specific transparency standards, including publication of AI uses and justifications. The memo grants travelers the right to opt out of airport facial recognition systems that the Transportation Security Administration controls. Exceptions and waivers for national security, intelligence and law enforcement could “significantly undercut” the document’s intentions, the American Civil Liberties Union said in a statement Thursday. Harmful and discriminatory uses of AI by national security agencies and state agencies “remain largely unchecked,” ACLU Senior Policy Counsel Cody Venzke said. The ACLU highlighted risks associated with law enforcement’s use of algorithmic systems like facial recognition and predictive policing, which produce “harmful results.” The Center for Democracy & Technology credited OMB for following stakeholder recommendations about improving agency transparency and government procurement of AI services. CDT said the White House missed an opportunity to establish data minimization standards.
Moreover, the administration should have provided a redress process if a chief AI officer “inappropriately grants a waiver,” CDT added. There’s “no recourse to challenge the validity of the decision to exempt AI uses.”
Kentucky Gov. Andy Beshear (D) should veto a “weak” data privacy bill the House approved Wednesday, Consumer Reports said Thursday. The House passed HB-15 with a 94-0 vote. The Senate vote was 35-0 on March 11. The bill would grant consumers rights to access, correct and delete data and allow them to opt out of targeted advertising and sale of data. Kentucky's attorney general would have sole authority to penalize offenders under HB-15, which would go into effect in January 2026 if enacted. Consumer Reports Policy Analyst Matt Schwartz called HB-15 an “industry bill,” saying it “offers almost no new substantive limitations on how companies collect or process data.” The bill is similar to Virginia’s privacy law but lacks kids’ privacy protections the commonwealth added this year, Husch Blackwell’s David Stauss said in a blog post Thursday. The Kentucky bill treats biometric data much as Connecticut’s privacy law does, he said: Video, audio and related data isn’t considered biometric data “unless it is used to identify a specific individual.” Kentucky would become the 15th state to pass a comprehensive privacy law if Beshear signs.
The Cybersecurity and Infrastructure Security Agency on Wednesday proposed using subpoena authority to ensure companies are complying with cyber incident reporting mandates. CISA issued an NPRM laying out rules for requirements under the 2022 Cyber Incident Reporting for Critical Infrastructure Act (see 2211290071). CISA proposed referring noncompliant entities to DOJ for civil penalties if they fail to produce requested information on incidents. The NPRM is scheduled for Federal Register publication April 4. Comments are due June 3.
Agencies need technologists, data scientists and other tech experts to properly regulate markets associated with AI, machine learning and augmented reality, the FTC and DOJ said in a joint statement Tuesday with 23 other international enforcement agencies, including the European Commission’s Directorate-General for Competition, the U.K. Competition and Markets Authority, the Japan Fair Trade Commission and the Turkish Competition Authority. Increasing “digitization of economies around the world require[s] a greater level of expertise in order to assess the behavior of companies and the ability to weigh potential benefits and risks of technology,” the FTC said.
Qualcomm's ending its bid to buy Autotalks will help preserve competition and innovation, FTC Competition Bureau Director Henry Liu said Monday. Qualcomm reportedly reached a deal for the Israeli chip manufacturer, valued at an estimated $350 million. Abandoning it will benefit consumers in the market for vehicle-to-everything (V2X) “chipsets and related products used in automotive safety systems,” said Liu. “This is a win for car buyers seeking quality, affordable cars with V2X communication capabilities that promise to make driving easier and safer.” The European Commission announced in August plans to scrutinize the deal in response to requests from 15 member states. Qualcomm said in a statement Monday it exited the deal “due to lack of regulatory approvals in a timely manner. Automotive is a very important vertical for Qualcomm, and we remain fully committed to our product roadmap, our customers and our partners.”
NTIA shouldn’t rely solely on national security agencies when assessing the benefits and risks of open AI models, a wide range of advocates wrote Commerce Secretary Gina Raimondo on Monday. NTIA’s public consultation (see 2402230039) should go through a “robust” interagency review with input from agencies focused on competition, civil rights and scientific research, “not just the agencies that oversee national security,” the groups said. The Center for Democracy & Technology, the Chamber of Progress, the Electronic Frontier Foundation, Engine, Fight for the Future, the Information Technology and Innovation Foundation, Mozilla, the Open Technology Institute, Public Knowledge and R Street Institute signed the letter.
A Pennsylvania House committee approved legislation Tuesday that would establish age-verification and content-flagging requirements for social media companies. The House Consumer Protection Committee advanced HB-2017 to the floor with a 20-4 vote. Four Republicans voted against, citing privacy and free speech concerns. Introduced by Rep. Brian Munroe (D), the bill would grant the attorney general sole authority to impose penalties against platforms that fail to obtain proper age verification and parental consent or fail to flag harmful content for parents. The committee removed a private right of action from the legislation during Tuesday’s markup. Munroe said the bill strengthens age verification by requiring consent from a parent or legal guardian. It also requires that platforms monitor chats and notify parents of sensitive or graphic content. Once notified, parents can correct the problem, said Rep. Craig Williams (R). Rep. Lisa Borowski (D) called the bill a “small step” toward better protecting young people. Rep. Joe Hogan (R) said legislation shouldn’t increase Big Tech's control over what’s permissible speech, citing data abuse from TikTok. He voted against the bill with fellow Republicans, Reps. Abby Major, Jason Ortitay and Alec Ryncavage. The Computer & Communications Industry Association urged legislators to reject the proposal, saying increased data collection requirements create privacy issues, restrict First Amendment rights and conflict with data minimization principles.