Centre for Environmental Change and Human Resilience, University of Dundee
Abstract: Widespread adoption of the carbon law roadmap likely means major investment in technological innovations, including those using artificial intelligence (AI). This comes at the same time that significant concerns are being raised about AI safety risks and the displacement of human labour through automation. Adoption of the carbon law roadmap must therefore include social equity and AI safeguards.
In their recent and widely publicised paper, Rockström et al. (2017) describe what they term a ‘carbon law’ roadmap for rapidly decarbonising the atmosphere in order to stem global warming in line with the internationally agreed target of well below 2.0°C. Essentially, they argue that to meet this critical challenge, humanity must halve greenhouse gas emissions every decade until 2050, while also removing atmospheric carbon at comparable decadal rates, in a manner akin to Moore’s Law for the doubling of computing power. While there can be no doubt about the need for such a massive shift if humanity is to have a credible chance of keeping planetary warming within Paris Agreement limits, the carbon law roadmap is heavily dependent on technological innovation in key areas such as carbon-free energy production, transportation, and carbon capture and storage.
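As a rough numerical sketch of the halving schedule (assuming, purely for illustration, a round baseline of 40 GtCO2 per year in 2020; this figure and the function name are illustrative, not taken from Rockström et al.), emissions under the carbon law follow a simple exponential decay:

```python
# Illustrative sketch of the 'carbon law' halving-per-decade schedule.
# The 2020 baseline of 40 GtCO2/yr is an assumed round figure for
# illustration only, not a value from Rockström et al. (2017).
BASELINE_YEAR = 2020
BASELINE_EMISSIONS = 40.0  # GtCO2 per year (assumed)

def carbon_law_emissions(year, e0=BASELINE_EMISSIONS, y0=BASELINE_YEAR):
    """Emissions under halving-per-decade: e0 * 2^(-(year - y0) / 10)."""
    return e0 * 2 ** (-(year - y0) / 10)

for year in range(2020, 2051, 10):
    print(year, carbon_law_emissions(year))
# 2020 40.0
# 2030 20.0
# 2040 10.0
# 2050 5.0
```

Each decade halves the previous decade's total, Moore's Law run in reverse: 40 → 20 → 10 → 5 GtCO2 per year by 2050.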
Such a radical plan for rapid technological innovation arrives while other widely discussed research stresses existential risks to humanity from the rise of artificial intelligence (AI) and the mass obsolescence of human wage labour through automation. For example, a recent prediction suggests that up to 47% of ‘routine’ US jobs are at risk from automation within the next 20 years (Frey and Osborne, 2017). What is more, AI theorists propose that in the coming years or decades, advances in machine intelligence could produce technology that meets or exceeds human intelligence and capabilities (superintelligence, or high-level machine intelligence, HLMI), and that such systems could self-perpetuate and expand in capability and scale beyond human control, rendering humans superfluous to machine aims (Bostrom, 2014; Bostrom et al., 2016). Midpoint expert estimates place the chance of achieving HLMI at 10% by 2024, 50% by 2050 and 90% by 2075 (Müller and Bostrom, 2016), a timeline that roughly parallels that of the carbon law. A worrisome scenario thus exists in which the carbon law roadmap leads to a crossing of the HLMI threshold, compounding one risk for humanity as it attempts to halt another.
The risks involved in implementing the carbon law roadmap therefore amount to a further twist on Moore’s Law: the innovation required to decarbonise society and the atmosphere may also produce even more rapid advances in machine intelligence and automation, at the expense of the necessity of human beings to the aim of decarbonisation. At best, this may mean AI and automation play significant roles in designing, producing and even implementing low-, zero- or negative-carbon technologies at doubling rates of at least once per decade. At worst, major investment in carbon law schemes may yield innovations leading to a machine superintelligence that regards human beings as the causal source of greenhouse gas emissions and concludes that the most effective pathway toward rapid decarbonisation (a human- or AI-introduced goal parameter?) is to treat humans as a carbon-intensive technology. A heuristic representation of this relationship is presented in figure 1. Indeed, science and society have already drawn a form of this conclusion through the anthropogenic global warming consensus that the carbon law aims to resolve, as well as the broader Anthropocene proposition (Lewis and Maslin, 2015).
Thus, should the carbon law roadmap be adopted at scale, researchers, industry and policymakers must pay very careful attention to how policies, investments and technologies are crafted and implemented, so as to avoid displacing the need for humanity in a rapid, technology-driven push for carbon reduction. The nascent principles of ‘friendly AI’ (Muehlhauser and Bostrom, 2014) and technology justice (Meikle, 2016) suggest conceptual research and policy pathways that can help ensure humanity remains relevant in the effort to decarbonise society. Friendly AI comprises steps to ensure that AI does not harm humanity; technology justice is the principle that technology should be distributed fairly and helpfully across society to promote equity. Incorporating these kinds of principles into carbon law efforts will help ensure that any implementation of the roadmap contains safeguards for social equity and against major unintended AI risks.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Bostrom N (2014) Superintelligence: paths, dangers, strategies. First edition. Oxford: Oxford University Press.
Bostrom N, Dafoe A and Flynn C (2016) Policy Desiderata in the Development of Machine Superintelligence [version 3.6]. Working Paper, Oxford: Future of Humanity Institute.
Frey CB and Osborne MA (2017) The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change 114: 254–280.
Lewis SL and Maslin MA (2015) Defining the Anthropocene. Nature 519(7542): 171–180.
Meikle A (2016) Technology justice: a call to action. Rugby, UK: Practical Action Publishing. Available from: http://dx.doi.org/10.3362/9781780446585.
Muehlhauser L and Bostrom N (2014) Why we need friendly AI. Think 13(36): 41–47.
Müller VC and Bostrom N (2016) Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In: Müller VC (ed.), Fundamental Issues of Artificial Intelligence, Cham: Springer International Publishing, pp. 555–572. Available from: http://dx.doi.org/10.1007/978-3-319-26485-1_33.
Rockström J, Gaffney O, Rogelj J, et al. (2017) A roadmap for rapid decarbonization. Science 355(6331): 1269.