Crucial Questions for Longtermists - Part 3

Questions about non-specific risks, existential risk factors, or existential security factors

See here for notes and links related to these topics.

  • Value of, and best approaches to, work related to global catastrophes and/or civilizational collapse
    • How much should we be concerned by possible concurrence, combinations, or cascades of catastrophes?
    • How much worse in expectation would a global catastrophe make our long-term trajectory?
      • How effectively, if at all, would a global catastrophe serve as a warning shot?
      • What can we (currently) learn from previous global catastrophes (or things that came close to being global catastrophes)?
    • How likely is collapse, given various intensities of catastrophe?
      • How resilient is society?
    • How likely would a collapse make each of the following outcomes: Extinction; permanent stagnation; recurrent collapse; “scarred” recovery; full recovery?
      • What’s the minimum viable human population (from the perspective of genetic diversity)?
      • How likely is economic and technological recovery from collapse?
        • What population size is required for economic specialisation, technological development, etc.?
      • Might we have a “scarred” recovery, in which our long-term trajectory remains worse in expectation despite economic and technological recovery? How important is this possibility?
      • What can we (currently) learn from previous collapses of specific societies, or near-collapses?
    • What are the best approaches for improving mitigation of, resilience to, and recovery from global catastrophes and/or collapse (rather than preventing them)? How valuable are these approaches?
      • (How much) Should we worry about “moral hazard”?
      • (How much) Should we worry about “which world gets saved”?
  • Value of, and best approaches to, work related to war
    • By how much does the possibility of various levels/types of wars raise total existential risk?
      • How likely are wars of various levels/types?
        • How likely are great power wars?
    • By how much do wars of various levels/types increase existential risk?
      • By how much do great power wars increase existential risk?
  • Value of, and best approaches to, work related to improving institutions and/or decision-making
  • Value of, and best approaches to, work related to existential security and the Long Reflection
    • Can we achieve existential security? How?
    • Are there downsides to pursuing existential security? If so, how large are they?
    • How important is it that we have a Long Reflection process? What should such a process involve? How can we best prepare for and set up such a process?

We have also collected here some questions that seem less important, or where it’s not clear that there’s really disagreement on them that fuels differences in strategic views and choices among longtermists. These include questions about “natural” risks (other than “natural” pandemics, which some of the above questions already addressed).

Directions for future work

We’ll soon publish a post discussing in more depth the topic of optimal timing for work and donations. We’d also be excited to see future work which:

  • Provides that sort of more detailed discussion for other topics raised in this post
  • Attempts to actually answer some of these questions, or to at least provide relevant arguments, evidence, etc.
  • Identifies additional crucial questions
  • Highlights additional relevant references
  • Further discusses how beliefs about these questions empirically do and/or logically should relate to each other and to strategic views and choices
    • This could potentially be visually “mapped”, perhaps with a similar style to that used in this post.
    • This could also include expert elicitation or other systematic collection of data on actual beliefs and decisions. That would also have the separate benefit of providing one “outside view”, which could be used as input into what one “should” believe about these questions.
  • Attempts to build formal models of what one should believe or do, or how the future is likely to go, based on various beliefs about these questions (a minimal illustrative sketch of this idea follows this list)
    • Ideally, it would be possible for readers to provide their own inputs and see what the results “should” be
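
As a minimal sketch of the kind of formal model described above, the Python snippet below combines a reader’s subjective probabilities about catastrophe, collapse, and post-collapse outcomes into a rough distribution over long-term trajectories. Everything here is hypothetical: the function, parameter names, and default probabilities are placeholders that a reader would replace with their own beliefs about the outcomes listed earlier (extinction, permanent stagnation, recurrent collapse, “scarred” recovery, full recovery).

```python
# Minimal illustrative sketch (not an endorsed model): turns a reader's
# subjective probabilities into a rough distribution over coarse
# long-term outcomes. All parameter names and default values are
# hypothetical placeholders, not estimates from this post.

def trajectory_distribution(
    p_catastrophe: float = 0.2,         # hypothetical: P(global catastrophe)
    p_collapse_given_cat: float = 0.3,  # hypothetical: P(collapse | catastrophe)
    outcomes_given_collapse: dict | None = None,
) -> dict:
    """Return a probability distribution over coarse long-term outcomes."""
    if outcomes_given_collapse is None:
        # Hypothetical conditional probabilities over the post-collapse
        # outcomes discussed in the questions above.
        outcomes_given_collapse = {
            "extinction": 0.05,
            "permanent stagnation": 0.10,
            "recurrent collapse": 0.15,
            "scarred recovery": 0.30,
            "full recovery": 0.40,
        }
    assert abs(sum(outcomes_given_collapse.values()) - 1.0) < 1e-9

    p_collapse = p_catastrophe * p_collapse_given_cat
    dist = {k: p_collapse * v for k, v in outcomes_given_collapse.items()}
    # Remaining probability mass: no collapse occurs (either no catastrophe,
    # or a catastrophe that society absorbs without collapsing).
    dist["no collapse"] = 1.0 - p_collapse
    return dist

if __name__ == "__main__":
    for outcome, p in trajectory_distribution().items():
        print(f"{outcome}: {p:.3f}")
```

The value of even so crude a sketch is mainly to make disagreements explicit: two people who accept the same structure but disagree about, say, P(collapse | catastrophe) can see exactly how their long-term outlooks diverge.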

Such work could be done as standalone outputs, or simply by commenting on this post or the linked Google docs. Please also feel free to get in touch with us if you are looking to do any of the types of work listed above.

Acknowledgements

This post and the associated documents were based in part on ideas and earlier writings by Justin Shovelain and David Kristoffersson, and benefitted from input from them. We received useful comments on a draft of this post from Arden Koehler, Denis Drescher, and Gavin Taylor, and useful comments on the section on optimal timing from Michael Dickens, Phil Trammell, and Alex Holness-Tofts. We’re also grateful to Jesse Liptrap for work on an earlier draft, and to Siebe Rozendal for comments on another earlier draft. This does not imply these people’s endorsement of all aspects of this post.

  1. Most of the questions we cover are actually also relevant to people who are focused on existential risk reduction for reasons unrelated to longtermism (e.g., due to person-affecting arguments, and/or due to assigning sufficiently high credence to near-term technological transformation scenarios). However, for brevity, we will often just refer to “longtermists” or “longtermism”.
  2. Of course, some questions about morality are relevant even if longtermism is taken as a starting assumption. This includes questions about how important reducing suffering is relative to increasing happiness, and how much moral status various beings should get. Thus, we will touch on such questions, and link to some relevant sources. But we’ve decided to not include such questions as part of the core focus of this post.
  3. For example, we get as fine-grained as “How likely is counterforce vs. countervalue targeting [in a nuclear war]?”, but not as fine-grained as “Which precise cities will be targeted in a nuclear war?” We acknowledge that there’ll be some arbitrariness in our decisions about how fine-grained to be.
  4. Some of these questions are more relevant to people who haven’t (yet) accepted longtermism, rather than to longtermists. But all of these questions can be relevant to certain strategic decisions by longtermists. See the linked Google doc for further discussion.
  5. See also our Database of existential risk estimates.
  6. This category of strategies for influencing the future could include work aimed towards shifting some probability mass from “ok” futures (which don’t involve existential catastrophes) to especially excellent futures, or shifting some probability mass from especially awful existential catastrophes to somewhat “less awful” existential catastrophes. We plan to discuss this category of strategies more in an upcoming post. We intend this category to contrast with strategies aimed towards shifting probability mass from “some existential catastrophe occurs” to “no existential catastrophe occurs” (i.e., most existential risk reduction work).
  7. This includes things like how likely “ok” futures are relative to especially excellent futures, and how likely especially awful existential catastrophes are relative to somewhat “less awful” ones.
  8. This is about altruism in a general sense (i.e., concern for the wellbeing of others), not just EA specifically.
  9. This refers to actions that speed development up in a general sense, or that “merely” change when things happen. This should be distinguished from changing which developments occur, or differentially advancing some developments relative to others.
  10. Biorisk includes both natural pandemics and pandemics involving synthetic biology. Thus, this risk does not completely belong in the section on “emerging technologies”. We include it here anyway because anthropogenic biorisk will be our main focus in this section, given that it’s the main focus of the longtermist community and that there are strong arguments that it poses far greater existential risk than natural pandemics do (see e.g. The Precipice).
