Type: Journal Article
Published in: -
Year: 2021
Author(s): Sætra, Henrik Skaug
URL: -
Access: Open access
ID: 1011037
A Framework for Evaluating and Disclosing the ESG Related Impacts of AI with the SDGs
Artificial intelligence (AI) now permeates all aspects of modern society, and we are simultaneously seeing an increased focus on sustainability in all human activities. All major corporations are now expected to account for their environmental and social footprint and to disclose and report on their activities. This is done through a diverse set of standards, frameworks, and metrics related to what is referred to as ESG (environmental, social, and governance), a term that is increasingly replacing the older term CSR (corporate social responsibility). The challenge addressed in this article is that none of these frameworks sufficiently captures the nature of the sustainability-related impacts of AI. This creates a situation in which companies are not incentivised to properly analyse such impacts, and it allows companies that are aware of negative impacts to avoid disclosing them. This article proposes a framework for evaluating and disclosing ESG-related AI impacts based on the United Nations' Sustainable Development Goals (SDGs). The core of the framework is presented here, with examples of how it forces an examination of micro-, meso-, and macro-level impacts, a consideration of both negative and positive impacts, and an accounting of ripple effects and interlinkages between the different impacts. Such a framework makes analyses of AI-related ESG impacts more structured, systematic, and transparent, and it allows companies to draw on research in AI ethics in such evaluations. In the closing section, Microsoft's sustainability reporting from 2018 and 2019 is used as an example of how sustainability reporting is currently carried out and how it might be improved by the approach advocated here.