'Heart' of Autonomous Driving

Google to Focus on Machine Learning Errors and 'Extraordinarily' Bad Results

Artificial intelligence driven by machine learning functions properly most of the time, but when it's wrong, the results can be “extraordinarily” bad, and that deserves attention, said Google Legal Director Kenneth Rubenstein Friday. He spoke on a panel about AI and autonomous driving at an Association of National Advertisers event.

Google has had its own examples of machine learning gone wrong, said Rubenstein, noting the technology is at the “heart” of autonomous driving. Google's sister Alphabet subsidiary Waymo plans to launch the first fleet of autonomous ride-hail vehicles this year in the Phoenix area (see 1803130053). A Google self-driving vehicle caused its first accident two years ago, when an autonomous Lexus drove around sandbags in the roadway but failed to yield to an oncoming bus in an adjacent lane. The two vehicles collided at low speed, and no one was injured.

The hardest part of autonomous driving isn't the driving itself but the unpredictable ways people, objects and aggressive drivers can create scenarios that AI can't always account for, said Rubenstein. Self-driving cars could function better if all vehicles on the road were autonomous, he said, and autonomous flying is easier because there's less traffic to anticipate. Google's autonomous-driving systems are trained in virtual cities, where the computers are fed data, he said. The more data gathered, the smarter the machine, Rubenstein said.

The EU's general data protection regulation (see 1803090045) is causing some consternation. World Federation of Advertisers Senior Manager Public Affairs-Digital Governance Exchange Catherine Armitage called GDPR, which takes effect May 25, the beginning of a shift in how regulators and consumers view data. Countries around the world are using the GDPR as a model for more stringent data laws; China, for example, is implementing its own data law in May, she said. On the risks associated with noncompliance, most companies are worried about reputation, loss of consumer trust and brand perception, Armitage said.

Asked how disruptive the GDPR will be, Rubenstein said he would like to give a definitive answer, but it's a brand-new law and there hasn't been “a lot of guidance” on what constitutes data consent and notice. All Google can do is make its best effort to do what it thinks the law requires, he said.

On an IoT panel, Loeb & Loeb media and technology lawyer Nathan Hole said projections have varied widely, but the number of smart devices in U.S. homes is expected to grow to 244 million within five years. The FTC has been relatively inactive on device-related privacy enforcement but has provided guidance, he said. He cited the agency’s decision in June to update its Children’s Online Privacy Protection rule guidance to reflect the growing number of internet-connected toys and other devices. The agency also issued an enforcement policy statement for COPPA in October and held a workshop on informational injury in December. This all reflects the FTC’s shifting role as it adapts to IoT growth, said Hole.