Policy and Legal Considerations

While a comprehensive examination of the ways artificial intelligence (AI) interacts with the law is beyond the scope of this inaugural report, this much seems clear: as a transformative technology, AI has the potential to challenge any number of legal assumptions in the short, medium, and long term. Precisely how law and policy will adapt to advances in AI—and how AI will adapt to values reflected in law and policy—depends on a variety of social, cultural, economic, and other factors, and is likely to vary by jurisdiction.

American law represents a mixture of common law, federal, state, and local statutes and ordinances, and—perhaps of greatest relevance to AI—regulations. Depending on its instantiation, AI could implicate each of these sources of law. For example, Nevada passed a law broadly permitting autonomous vehicles and instructed the Nevada Department of Motor Vehicles to craft the applicable requirements. Meanwhile, the National Highway Traffic Safety Administration has determined that a self-driving car system, rather than the vehicle's occupants, can be considered the “driver” of a vehicle. Some car designs sidestep this issue by remaining in autonomous mode only while the driver's hands touch the wheel at least intermittently, so that the human driver retains ultimate control and responsibility. Still, Tesla's adoption of this strategy did not prevent the first traffic fatality involving a car in autonomous mode, which occurred in May 2016 and came to light that June. Such incidents are sure to influence public attitudes toward autonomous driving. And because autonomous transportation is likely to be most people's first experience with embodied AI agents, it will strongly influence the public's perception of AI.
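To make the hands-on-wheel design concrete, here is a minimal sketch of that supervision logic. Everything in it is hypothetical: the class, method names, and grace period are invented for illustration and do not reflect any manufacturer's actual implementation.

```python
import time

# Hypothetical sketch of the hands-on-wheel rule described above: the car
# remains in autonomous mode only if the driver has touched the wheel
# within a grace period, keeping ultimate control with the human.

HANDS_ON_GRACE_SECONDS = 30.0  # illustrative threshold, not any vendor's real value

class AutopilotSupervisor:
    def __init__(self):
        self.autonomous = False
        self.last_hands_on = 0.0

    def engage(self):
        self.autonomous = True
        self.last_hands_on = time.monotonic()

    def report_hands_on_wheel(self):
        # Called whenever the steering-wheel touch/torque sensor fires.
        self.last_hands_on = time.monotonic()

    def tick(self):
        # Called periodically by the vehicle's control loop.
        if self.autonomous and time.monotonic() - self.last_hands_on > HANDS_ON_GRACE_SECONDS:
            # A real system would escalate with visual and audible warnings first.
            self.autonomous = False  # hand control back to the human driver
```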

Driverless cars are, of course, but one example of the many instantiations of AI in services, products, and other contexts. The legal effect of introducing AI into the provision of tax advice, automated stock trading, or medical diagnosis will also vary according to the regulators that govern these contexts and the rules that apply within them. Many other AI applications fall within existing, non-technology-specific policy, including predictive policing, non-discriminatory lending, healthcare applications such as eldercare and drug delivery, systems designed to interact with children (autonomous tutoring systems, for example, must respect laws mandating balanced treatment of evolution and intelligent design), and interactive entertainment.

Given the present structure of American administrative law, it seems unlikely that AI will be treated comprehensively in the near term. Nevertheless, it is possible to enumerate broad categories of legal and policy issues that AI tends to raise in various contexts.

Privacy

Private information about an individual can be revealed through decisions and predictions made by AI. While some of the ways that AI implicates privacy mirror those of technologies such as computers and the internet, other issues may be unique to AI. For example, the potential of AI to predict future behavior based on previous patterns raises challenging questions. Companies already use machine learning to predict credit risk. And states run prisoner details through complex algorithms to predict the likelihood of recidivism when considering parole. In these cases, it is a technical challenge to ensure that factors such as race and sexual orientation are not being used to inform AI-based decisions. Even when such features are not directly provided to the algorithms, they may still correlate strongly with seemingly innocuous features such as zip code. Nonetheless, with careful design, testing, and deployment, AI algorithms may be able to make less biased decisions than a typical person.
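The proxy problem can be made concrete with a simple audit: if the withheld protected attribute is predictable from the features a model is allowed to use, excluding the attribute itself offers little protection. The following sketch uses scikit-learn on entirely synthetic data; the variables and correlation strengths are invented for illustration.

```python
# Illustrative proxy-variable audit on synthetic data: check whether a
# withheld protected attribute can be recovered from "innocuous" features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
protected = rng.integers(0, 2, n)                 # attribute withheld from the model
zip_code = protected * 5 + rng.integers(0, 5, n)  # correlates with it by construction
income = rng.normal(50 + 10 * protected, 15, n)   # another correlated feature
X = np.column_stack([zip_code, income])           # the "allowed" features

X_train, X_test, p_train, p_test = train_test_split(X, protected, random_state=0)
probe = LogisticRegression().fit(X_train, p_train)
auc = roc_auc_score(p_test, probe.predict_proba(X_test)[:, 1])
print(f"Protected attribute recoverable from allowed features: AUC = {auc:.2f}")
# An AUC well above 0.5 signals that dropping the attribute does not stop a
# downstream model (e.g., a credit or recidivism predictor) from encoding it.
```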

Anthropomorphic interfaces increasingly associated with AI raise novel privacy concerns. Social science research suggests people are hardwired to respond to anthropomorphic technology as though it were human. Subjects in one study were more likely to disclose when they were born if the computer first disclosed when it was built.[132] In another, they skipped sensitive questions when these were posed by an anthropomorphic interface.[133] A basic question remains: will humans continue to enjoy the prospect of solitude in a world permeated by apparently social agents “living” in our houses, cars, offices, hospital rooms, and phones?[134]

Innovation policy

Early law and policy decisions concerning liability and speech helped ensure the commercial viability of the Internet. By contrast, the software industry arguably suffers today from the decision of firms to pivot from open and free software to the more aggressive pursuit of intellectual property protections, resulting in what some have termed patent “thickets.” Striking the proper balance between incentivizing innovation in AI and promoting cooperation and protection against third-party harm will prove a central challenge.

Liability (civil)

As AI systems come to affect the world directly, even physically, liability for the harms they cause will grow in salience. The prospect that AI will behave in ways its designers do not expect challenges the prevailing assumption in tort law that courts compensate only for foreseeable injuries. Courts might arbitrarily assign liability to a human actor even when liability is better located elsewhere for reasons of fairness or efficiency. Alternatively, courts could refuse to find liability because the defendant before the court did not, and could not, foresee the harm that the AI caused; liability would then fall by default on the blameless victim. The role of product liability—and the responsibility that falls to the companies manufacturing these products—will likely grow as human actors become less responsible for the actions of a machine.

Liability (criminal)

If tort law expects harms to be foreseeable, criminal law goes further and expects that harms be intended. US law in particular attaches great importance to the concept of mens rea—the intending mind. As AI applications engage in behavior that, were it done by a human, would constitute a crime, courts and other legal actors will have to puzzle through whom to hold accountable and on what theory.

Agency

The above issues raise the question of whether and under what circumstances an AI system could operate as the agent of a person or corporation. Already, regulatory bodies in the United States, Canada, and elsewhere are setting the conditions under which software can enter into a binding contract.[135] The more that AI conducts legally salient activities, the greater the challenge to principles of agency under the law.

Certification

The very notion of “artificial intelligence” suggests a substitution for human skill and ingenuity. And in many contexts, ranging from driving to performing surgery to practicing law, a human must attain some certification or license before performing a given task. Accordingly, law and policy will have to—and already do—grapple with how to determine competency in an AI system. For example, imagine a robotics company creates a surgical platform capable of autonomously removing an appendix. Or imagine a law firm writes an application capable of rendering legal advice. Today, it is unclear from a legal perspective who in this picture would have to pass the medical boards or the legal bar, let alone where they would be required to do so.[136]

Labor

As AI substitutes for human roles, some jobs will be eliminated and new jobs will be created. The net effect on jobs is ambiguous, but labor markets are unlikely to benefit everyone evenly. The demand for some types of skills or abilities will likely drop significantly, negatively affecting the employment levels and wages of the people who hold them.[137] The ultimate effects on income levels and distribution are not preordained; they depend substantially on government policies, on the ways companies choose to organize work, and on the decisions of individuals to invest in learning new skills and to seek new types of work and income opportunities. People who find their employment altered or terminated as a consequence of advances in AI may seek recourse in the legislature and the courts. This may be why Littler Mendelson LLP—perhaps the largest employment law firm in the world—has an entire practice group addressing robotics and artificial intelligence.

Taxation

Federal, state, and local revenue sources may be affected. Accomplishing a task with AI instead of a person can be faster and more accurate—and avoid employment taxes. As a result, AI could increasingly shift investment from payroll and income to capital expenditure. Depending on how heavily a state budget relies on payroll and income taxes, such a shift could be destabilizing. AI may also display different “habits” than people, further shrinking revenue sources. The many municipalities that rely on income from speeding or parking tickets will have to find alternatives if autonomous cars can drop people off and find distant parking, or if they are programmed not to violate the law. As a result, government bodies trying to balance their budgets in light of advances in AI may pass legislation to slow down or alter the course of the technology.
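A rough back-of-the-envelope comparison illustrates the payroll point. The 7.65% figure is the employer's share of US Social Security and Medicare payroll taxes; the wage, machine price, and useful life below are invented for illustration.

```python
# Hypothetical labor-vs-capital cost comparison for the same task.
wage = 50_000                       # annual salary for the human worker (invented)
employer_payroll_tax = 0.0765       # employer share of Social Security + Medicare
labor_cost = wage * (1 + employer_payroll_tax)

machine_price = 120_000             # one-time capital expenditure (invented)
useful_life_years = 3
capital_cost = machine_price / useful_life_years  # straight-line, ignoring upkeep

print(f"Annual labor cost:   ${labor_cost:,.0f}")    # $53,825
print(f"Annual capital cost: ${capital_cost:,.0f}")  # $40,000
# The capital route generates neither payroll taxes nor employee income-tax
# withholding, which is the revenue shift the paragraph describes.
```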

Politics

AI technologies are already being used by political actors in gerrymandering, in targeted “robocalls” designed to suppress votes, and on social media platforms in the form of “bots.”[138] They can enable coordinated protest, as well as the prediction of protests, and can promote greater transparency in politics by pinpointing more accurately who said what, and when. Administrative and regulatory laws regarding AI can thus be designed to promote greater democratic participation or, if ill-conceived, to reduce it.

This list is not exhaustive and focuses largely on domestic policy in the United States, leaving out many areas of law that AI is likely to touch. One lesson that might be drawn concerns the growing disconnect between the context-specific way in which AI is governed today and a wider consideration of themes shared by AI technologies across industries or sectors of society. It could be tempting to create new institutional configurations capable of amassing expertise and setting AI standards across multiple contexts. The Study Panel’s consensus is that attempts to regulate “AI” in general would be misguided, since there is no clear definition of AI (it isn't any one thing), and the risks and considerations are very different in different domains. Instead, policymakers should recognize that, to varying degrees and over time, various industries will need distinct, appropriate regulations that touch on software built using AI or incorporating AI in some way. The government will need the expertise to scrutinize standards and technology developed by the private and public sectors, and to craft regulations where necessary.

 


[132] Youngme Moon, “Intimate Exchanges: Using Computers to Elicit Self-Disclosure from Consumers,” Journal of Consumer Research 26, no. 4 (March 2000): 323-339.

[133] Lee Sproull, Mani Subramani, Sara Kiesler, Janet H. Walker, and Keith Waters, “When the Interface is a Face,” Human-Computer Interaction 11, no. 2 (1996): 97-124.

[134] M. Ryan Calo, “People Can Be So Fake: A New Dimension to Privacy and Technology Scholarship,” Penn State Law Review 114, no. 3 (2010): 809-855.

[135] Ian R. Kerr, “Ensuring the Success of Contract Formation in Agent-Mediated Electronic Commerce,” Electronic Commerce Research 1 (2001): 183-202.

[136] Ryan Calo, "Digital Agenda’s public discussion on ‘The effects of robotics on economics, labour and society,'" Ausschuss Digitale Agenda, (Deutsche Bundestag: Ausschussdrucksache A-Drs.18(24)102), June 22, 2016, accessed August 1, 2016, https://www.bundestag.de/blob/428266/195a1cde8d5347849accbbe60ed91865/a-drs-18-24-102-data.pdf.

[137] Erik Brynjolfsson and Andrew McAfee, Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (2011); Brynjolfsson and McAfee, The Second Machine Age (2014).

[138] Political Bots, accessed August 1, 2016, http://politicalbots.org/.
