
Rules for Robots: The Path to Effective AI Regulation

June 12, 2019

Hadfield

Professor Gillian Hadfield is diving into the fray over AI regulation. At a recent MIT IDE seminar, the University of Toronto Professor of Law and of Strategic Management said that “AI’s promises cannot be realized without regulation to ensure that it is built and deployed in ways that are responsive to our publicly set goals for humans and the planet.”

But it certainly won’t be easy to achieve.

In fact, Hadfield (pictured, above) said two factors are colliding “to put at serious risk our ability to accomplish those goals.” One is the tremendous speed with which AI is advancing and spreading into just about every facet of the economy and society. The other “is the extreme resistance and structural barriers to change and innovation in our legal and policy systems.”

Hadfield, who has studied and written extensively about the relationship between law, technology, and economics, noted in her talk, Rules for Robots: Building Effective Regulation for Artificial Intelligence, that tensions are building quickly as AI and machine learning advance. Specifically, debate over how to regulate AI is intensifying worldwide. This past year, attention has focused on facial recognition, privacy, and data access. Hadfield pointed to Europe’s General Data Protection Regulation (GDPR), which took effect in 2018, and to public pledges by Facebook and Amazon executives to “self-regulate” their services as examples of a shifting environment. Google, for its part, has so far been unable to get its internal AI ethics council off the ground.

Hadfield believes that regulatory oversight is needed, and that much of the current discussion understates the complexity of the issue. Basing new regulations on old structures and assumptions is insufficient, she said. What’s needed is a dual approach to these challenges: first, build AI systems that can interact with human norms, rules, and law; then, build “a novel regulatory structure—third-party regulatory markets—to spur the development and deployment of innovative regulatory technologies that can keep up with the speed and complexity of advances in AI.”

These overhauls may take us into new territory. For instance, AI and machine learning may expose the imperfections of human reasoning as well as the incompleteness of many legal contracts, Hadfield wrote in a previous research paper. Machines may also anticipate behavior that humans failed to foresee and detect patterns that humans miss, a core part of what machine learning achieves. “Surprises are baked into [machine learning]. It is different than conventional code,” she told the MIT seminar attendees. Conventional software was fully specified by its programmers, while ML is becoming largely machine-driven: “Humans give the parameters; machines build their own models and programs.” In this environment, new legal definitions and social norms must be established to ensure privacy, access, and fair use of AI technology.
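To make that contrast concrete, here is a minimal sketch in Python (the loan-approval scenario, the data, and every name in it are invented for illustration; none of this comes from the talk). A conventional program encodes a rule its programmer wrote and can audit line by line; a machine-learning model is handed parameters and training examples and derives its own decision logic, which is where the “surprises” come from.

```python
from sklearn.tree import DecisionTreeClassifier

# Conventional code: a human writes the rule, and it is auditable line by line.
def approve_loan_conventional(income, debt):
    return income > 50_000 and debt / income < 0.4

# Machine learning: humans give the parameters; the machine builds the model.
# Hypothetical training data: [income, debt] pairs and past repayment outcomes.
X = [[60_000, 10_000], [30_000, 20_000], [80_000, 40_000], [45_000, 5_000]]
y = [1, 0, 0, 1]  # 1 = repaid, 0 = defaulted

model = DecisionTreeClassifier(max_depth=2)  # humans choose the parameters
model.fit(X, y)                              # the machine derives the rule itself

print(approve_loan_conventional(55_000, 10_000))  # True: the rule is transparent
print(model.predict([[55_000, 10_000]]))          # the learned rule may surprise us
```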

Discussion will continue to focus on trade secrecy and corporate information access. Ultimately, she said, the toughest question is a larger one: What do we want the world to look like? That’s a conversation we need to spend far more time on, too.

 

Gillian Hadfield is a Faculty Affiliate at the Vector Institute for Artificial Intelligence in Toronto and at the Center for Human-Compatible AI at the University of California, Berkeley. She is also a Senior Policy Advisor at OpenAI in San Francisco. Her book, Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy, was published by Oxford University Press in 2017.

Read her full bio here.