By Guest Work 15/05/2018


By Matt Boyd and Nick Wilson

We have just published an article (free online) on existential risks, with an NZ-orientated perspective.1 In this blog we discuss some of the issues that NZ society could start discussing.

Do we value future people?

Do we care about the wellbeing of people who don’t yet exist? Do we care whether the life-years of our great-grandchildren are filled with happiness rather than misery? Do we care about the future life-years of people alive now?

We assume you will answer “yes” in general terms, but in what way do we care?

You might merely think, ‘It’d be nice if they were happy and flourished’, or you may have stronger feelings, such as, ‘they have as much right as I do to at least the same kind of wellbeing that I’ve had’. The point is that the question can be answered in different ways.

All this is important because future people, and the future life-years of people living now, face serious existential threats. Existential threats are those that quite literally threaten our existence. These include runaway climate change, ecosystem destruction and starvation, nuclear war, deadly bioweapons,2 asteroid impacts, and artificial intelligence (AI) run amok,3 to name just a few.

The question of to what degree we ought to protect future people is one that, as a society, we don’t seem to discuss much. We certainly don’t appear to have clear policy guidance on the minimal rights of future people.

When there are several coherent positions available to policymakers, we ought to have public engagement and community debate to ensure investments are consistent with public views.

These are the themes that our recent article1 in the NZ-orientated journal Policy Quarterly introduces. We hope to start a local conversation about these issues.

Different ways to value future lives

Deciding whether or not we value the life-years of future people (who may not exist yet) is only the first step. We then need to decide how we value them, because the different options have implications for policy. In particular, our values will critically shape how we approach the threats from climate change, nuclear weapons, artificial intelligence, bioweapons made using synthetic biology, and so on.

We may, for example, value future people’s life-years less than the future life-years of those currently alive. This might be justified by arguing that we currently value people closer to us in space (e.g. family, or fellow nationals) more than we value people in far-flung countries (whether this is right or wrong is another matter!). So perhaps we ought to value those alive today more than those alive in a distant time too?

Or, perhaps we ought to value all future lives equally with the way we value the lives of those alive now. Perhaps some principle of universal justice, no matter when you are born, ought to apply.

On the other hand, we may not value future lives at all. Some ‘person-affecting’ views of morality claim that we can only do wrong if we affect someone, but future people don’t presently exist, so who is affected?

A further complication is that we sometimes favour known individuals in present danger over statistical lives at risk. For example, we may fund expensive emergency surgical procedures rather than cheap vaccines.

If ethics is impartial, where the wellbeing of one person does not automatically trump the wellbeing of someone else, then distance in relatedness, location, and perhaps time, might not be relevant.

Perhaps it is not future lives that we ought to value, but rather the continuation of our collective ‘human project’ – that project of artistic creation, scientific understanding and the exploration of our world and other worlds. Perhaps it is very important to us that some people survive in the future, but we don’t much care which people they are?

There are many different ways we can value the life-years of future people, and many different ways we can approach mitigating the risks to these lives. As a society we should be talking about this.

Uncertainty and discounting

With existential risks, the issue of prevention is complex because, unlike fires and road crashes, we have little idea of the probabilities involved. The potential loss, however, is catastrophic.

Also, we are uncertain about the needs of future people. They may be much wealthier than we are, with technology we can’t imagine.

These uncertainties might justify some discounting of the value of future lives.

On the other hand, human life is a different kind of good from other resources: human lives are not obviously tied to inflation, depreciation, or estimates of future value in the way material goods are. There therefore seem to be no good reasons to prefer any particular discount rate.

We need reasonable public deliberation in order to reach consensus on these issues, and the outcome of those deliberations should determine any discount rate on the value of future lives. If it turns out that the public favours a low discount rate on future lives, then we should shift significant resources into mitigating existential risks.
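To make concrete how much this choice matters, here is a minimal Python sketch (with illustrative rates, not figures from our article) of how the discount rate affects the present value of a life-year a century from now:

```python
# Present value of one life-year `years_ahead` years in the future,
# under a constant annual discount rate (standard exponential discounting).
def present_value(years_ahead: float, rate: float) -> float:
    return (1 + rate) ** -years_ahead

# Illustrative rates: 0%, 1% and 3% per year.
for rate in (0.0, 0.01, 0.03):
    pv = present_value(100, rate)
    print(f"rate {rate:.0%}: a life-year 100 years out counts as {pv:.2f} of one today")
```

At 0% a future life-year counts fully; at 1% it counts for roughly a third; at 3% for roughly a twentieth. So it is not only whether we discount, but the rate we pick, that drives how much risk mitigation looks worthwhile.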

The number of lives at risk

The number of NZ life-years at risk is vast, and life-years at risk (one measure of utility) are currently a central concern to the NZ Government when setting policy.

If we assume a stable NZ population of 6 million, then over the next 1000 years (a comparable time to that which humans have inhabited NZ to date) there is a cumulative total of 70 million life-years among those NZers already alive (14% of the total) and 515 million life-years among NZers not-yet-born (at a discount rate of 1%).
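A crude Python sketch reproduces the order of magnitude of these figures (the article’s exact split between those alive now and the not-yet-born rests on demographic assumptions not reproduced here):

```python
# Total discounted life-years for a stable population of 6 million
# over the next 1,000 years, at a 1% annual discount rate.
POPULATION = 6_000_000
YEARS = 1_000
RATE = 0.01

total = sum(POPULATION / (1 + RATE) ** t for t in range(1, YEARS + 1))
print(f"about {total / 1e6:.0f} million discounted life-years")
```

This simple sum gives roughly 600 million discounted life-years, the same order as the 585 million (70 + 515) total above; note that even a 1% discount rate has collapsed 6 billion undiscounted life-years by a factor of ten.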

Whether we calculate the life-years at risk of those presently alive, or of all future NZers the numbers at risk are vast. Even with discounting this may justify substantial investment in mitigation and resilience.

How much ought we to invest?

The question of ‘how much?’ is a complex interface where values meet cost-utility. As a simple exercise, value a life-year at the per capita GDP of NZ$45,000, take the 585 million future NZer life-years at risk, and assume a probability of 0.1% for an existential threat (of any kind) occurring in the next year. Given these values, it would be economically rational for NZ society to invest up to NZ$26 billion in eliminating that risk.
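The arithmetic behind that NZ$26 billion figure is a simple expected-loss calculation: value per life-year × life-years at risk × annual probability of catastrophe. As a Python sketch, using the figures from the text:

```python
# Expected annual loss from an existential catastrophe,
# using the illustrative figures given in the text above.
VALUE_PER_LIFE_YEAR = 45_000       # NZ$, per capita GDP
LIFE_YEARS_AT_RISK = 585_000_000   # discounted future NZ life-years
ANNUAL_RISK = 0.001                # 0.1% chance of catastrophe per year

rational_spend = VALUE_PER_LIFE_YEAR * LIFE_YEARS_AT_RISK * ANNUAL_RISK
print(f"NZ${rational_spend / 1e9:.1f} billion")  # NZ$26.3 billion
```

Of course, each input is contestable: the probability is deeply uncertain, and the life-years figure already embeds a choice of discount rate.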

Perhaps we should be investing in determining more precisely what these probabilities are likely to be, and working harder to reduce the likelihood. As one recent paper notes, there is no fire alarm for harmful AI4.

If we identify the relevant risks and mitigation strategies (and their costs), then we can consider the present opportunity costs of taking action. Preferences in evaluating these costs and benefits could be grounded in the views obtained from public engagement. For example, perhaps we ought to forgo billions of dollars of additional transport safety improvements and invest in resilience against existential risks instead. Canvassing the public’s views on the worth of past investments in risk mitigation might be important too: was it worth investing in vaccines? In the Earthquake Commission?

New Zealand is a small country, but it has previously campaigned for nuclear arms control (with partial success, in the end of atmospheric nuclear weapons testing). Our long-term thinking has produced the Superannuation Fund, the EQC, and national parks and marine reserves set aside for posterity. However, we could further protect our future with disaster infrastructure, disarmament negotiations, safety regulations for bio-threats, ethical training for AI engineers, pandemic responsiveness, or even contributions to underground or orbital habitats. Of course, all of these would be more achievable if we co-operated with like-minded countries.

But ideally, we first need to establish what NZ society wants and values.

References

  1. Boyd M, Wilson N. Existential risks: New Zealand needs a method to agree a value framework and how to quantify future lives at risk. Policy Quarterly 2018 August, online first. https://www.victoria.ac.nz/__data/assets/pdf_file/0011/1501013/Boyd_Wilson.pdf
  2. Boyd M, Baker MG, Mansoor OD, Kvizhinadze G, Wilson N. Protecting an island nation from extreme pandemic threats: Proof-of-concept around border closure as an intervention. PLoS One 2017 Jun 16;12(6):e0178732. doi:10.1371/journal.pone.0178732.
  3. Boyd M, Wilson N. Rapid developments in artificial intelligence: how might the New Zealand government respond? Policy Quarterly 2017;13(4):36-43. https://www.victoria.ac.nz/__data/assets/pdf_file/0010/1175176/Boyd.pdf
  4. Yudkowsky E. There’s no fire alarm for artificial general intelligence. Machine Intelligence Research Institute. Oct 13, 2017. https://intelligence.org/2017/10/13/fire-alarm/ (accessed May 8).