Questions about emerging technologies

Crucial Questions for Longtermists - Part 2

  • Value of, and best approaches to, work related to AI
    • Is it possible to build an artificial general intelligence (AGI) and/or transformative AI (TAI) system? Is humanity likely to do so?
    • What form(s) is TAI likely to take? What are the implications of that? (E.g., AGI agents vs comprehensive AI services)
    • What will the timeline of AI developments be?
      • How “hard” are various AI developments?
      • How much “effort” will go into various AI developments?
      • How discontinuous will AI development be?
        • Will development to human-level AI be discontinuous? How much so?
        • Will development from human-level AI be discontinuous? How much so?
        • Will there be a hardware overhang? How much would that change things?
      • How important are individual insights and “lumpy” developments?
      • Will we know when TAI is coming soon? How far in advance? How confidently?
      • What are the relevant past trends? To what extent should we expect them to continue?
    • How much should longtermists prioritise AI?
      • How high is existential risk from AI?
      • How “hard” is AI safety?
        • How “hard” are non-impossible technical problems in general?
        • To what extent can we infer that the problem is hard from failures or challenges thus far?
      • Should we expect people to handle AI safety and governance issues adequately without longtermist intervention?
        • To what extent will “safety” problems be solved simply in order to increase “capability” or “economic usefulness”?
        • Would there be clearer evidence of AI risk in future, if it’s indeed quite risky? Will that lead to better behaviours regarding AI safety and governance?
      • Could AI pose suffering risks? Is it the most likely source of such risks?
      • How likely are positive or negative “non-existential trajectory changes” as a result of AI-related events? To what extent does that mean longtermists should prioritise AI?
    • What forms might an AI catastrophe take? How likely is each?
    • What are the best approaches to reducing AI risk or increasing AI benefits?
      • From a longtermist perspective, how valuable are approaches focused on relatively “near-term” or “less extreme” AI issues?
      • What downside risks might (various forms of) work to reduce AI risk have? How big are those downside risks?
        • How likely is it that (various forms of) work to reduce AI risk would accelerate the development of AI? Would that increase overall existential risk?
      • How important is AI governance/strategy/policy work? Which types are most important, and why?
  • Value of, and best approaches to, work related to biorisk and biotechnology
    • What will the timeline of biotech developments be?
      • How “hard” are various biotech developments?
      • How much “effort” will go into various biotech developments?
    • How much should longtermists prioritise biorisk and biotech?
      • How high is existential risk from pandemics involving synthetic biology?
        • Should we be more concerned about accidental or deliberate creation of dangerous pathogens? Should we be more concerned about accidental or deliberate release? What kinds of actors should we be most concerned about?
      • How high is existential risk from naturally arising pandemics?
        • To what extent does the usual “natural risks must be low” argument apply to natural pandemics?
      • What can we (currently) learn from previous pandemics, near misses, etc.?
      • How high is the risk from antimicrobial resistance?
    • How much overlap is there between approaches focused on natural vs. anthropogenic pandemics, “regular” vs. “extreme” risks, etc.?
    • What are the best approaches to reducing biorisk?
      • What downside risks might (various forms of) work to reduce biorisk have? How big are those downside risks?
  • Value of, and best approaches to, work related to nanotechnology
    • What will the timeline of nanotech developments be?
      • How “hard” are various nanotech developments?
      • How much “effort” will go into various nanotech developments?
    • How high is the existential risk from nanotech?
    • What are the best approaches to reducing risks from nanotechnology?
      • What downside risks might (various forms of) work to reduce risks from nanotech have? How big are those downside risks?
  • Value of, and best approaches to, work related to interactions and convergences between different emerging technologies

Questions about specific existential risks (which weren’t covered above)

  • Value of, and best approaches to, work related to nuclear weapons
    • How high is the existential risk from nuclear weapons?
      • How likely are various types of nuclear war?
        • What countries would most likely be involved in a nuclear war?
        • How many weapons would likely be used in a nuclear war?
        • How likely is counterforce vs. countervalue targeting?
        • How likely are accidental launches?
        • How likely is escalation from accidental launch to nuclear war?
      • How likely are various severities of nuclear winter (given a certain type and severity of nuclear war)?
      • What would be the impacts of various severities of nuclear winter?
  • Value of, and best approaches to, work related to climate change
    • How high is the existential risk from climate change itself (not from geoengineering)?
      • How much climate change is likely to occur?
      • What would be the impacts of various levels of climate change?
      • How likely are various mechanisms for runaway/extreme climate change?
    • How tractable and risky are various forms of geoengineering?
      • How likely is it that risky geoengineering could be unilaterally implemented?
    • How much does climate change increase other existential risks?
  • Value of, and best approaches to, work related to totalitarianism and dystopias
    • How high is the existential risk from totalitarianism and dystopias?
      • How likely is the rise of a global totalitarian or dystopian regime?
      • How likely is it that a global totalitarian or dystopian regime that arose would last long enough to represent or cause an existential catastrophe?
    • Which political changes could increase or decrease existential risks from totalitarianism and dystopia? By how much? What other effects would those political changes have on the long-term future?
      • Would various shifts towards world government or global political cohesion increase risks from totalitarianism and dystopia? By how much? Would those shifts reduce other risks?
      • Would enhanced or centralised state power increase risks from totalitarianism and dystopia? By how much? Would it reduce other risks?
    • Which technological changes could increase or decrease existential risks from totalitarianism and dystopia? By how much? What other effects would those technological changes have on the long-term future?
      • Would further development or deployment of surveillance technology increase risks from totalitarianism and dystopia? By how much? Would it reduce other risks?
      • Would further development or deployment of AI for police or military purposes increase risks from totalitarianism and dystopia? By how much? Would it reduce other risks?
      • Would further development or deployment of genetic engineering increase risks from totalitarianism and dystopia? By how much? Would it reduce other risks?
      • Would further development or deployment of other technologies for influencing/controlling people’s values increase risks from totalitarianism and dystopia? By how much?
      • Would further development or deployment of life extension technologies increase risks from totalitarianism and dystopia? By how much?
