Nudging robots: Innovative solutions to regulate artificial intelligence

Accepted in Vanderbilt Journal of Entertainment & Technology Law.

There is a pervasive sense of unease that artificially intelligent machines will soon radically alter our lives in ways that are still unknown. AI technology is advancing at an extremely rapid rate as computational power continues to grow exponentially. Even if existential concerns about AI do not materialise, there are enough concrete examples of problems associated with current applications of artificial intelligence to warrant concern about the level of control that exists over developments in AI. Some form of regulation is likely necessary to protect society from risks of harm.

However, advances in regulatory capacity have not kept pace with developments in new technologies, including AI. This is partly because regulation has become decentred; that is, the traditional command role of public regulators such as governments has dissipated, and other participants, including those from within the industry, have taken the lead. Other contributing factors are dwindling government resources on the one hand and the increased power of technology companies on the other. These factors have left the field of AI development relatively unregulated. Whatever the reason, it is now more difficult for traditional public regulatory bodies to control the development of AI. In the vacuum, industry participants have begun to self-regulate by promoting soft law options such as codes of practice and standards. We argue that, despite the reduced authority of public regulatory agencies, the risks associated with runaway AI require regulators to begin to participate in what is largely an unregulated field. In an environment where resources are scarce, governments and public regulators must develop new ways of regulating. This paper proposes solutions for regulating the development of AI ex ante.
We suggest a two-step process. First, governments can set expectations and send signals to influence participants in AI development; we adopt the term nudging to refer to this type of influencing. Second, public regulators must participate in and interact with the relevant industries. By doing this, they can gather information and knowledge about those industries, begin to assess risks, and then be in a position to regulate the areas that pose the greatest risk first. To conduct a proper risk analysis, regulators must have sufficient knowledge and understanding of the target of regulation to be able to classify various risk categories. We propose an initial classification, based on the literature, that can help identify pressing issues for further research and deepen understanding of the various applications of AI and the relative risks they pose.
