How to make the best of the most important century?


Previously in the “most important century” series, I’ve argued that there’s a high probability that the coming decades will see the development of something like PASTA (AI that can automate scientific and technological advancement), a resulting productivity explosion, and perhaps a world of digital people.

Is this an optimistic view of the world, or a pessimistic one? To me, it’s both and neither, because this set of events could end up being very good or very bad for the world, depending on the details of how it plays out.

When I talk about being in the “most important century,” I don’t just mean that significant events are going to occur. I mean that we, the people living in this century, have the chance to have a huge impact on huge numbers of people to come – if we can make sense of the situation enough to find helpful actions.

But it’s also important to understand why that’s a big “if” – why the most important century presents a challenging strategic picture, such that many things we can do might make things better or worse (and it’s hard to say which).

In this post, I will present two contrasting frames for how to make the best of the most important century, along with some key points about them:

  • The “Caution” frame. In this frame, many of the worst outcomes come from developing something like PASTA in a way that is too fast, rushed, or reckless. We may need to achieve (possibly global) coordination in order to mitigate pressures to race, and take appropriate care.
  • The “Competition” frame. This frame focuses not on how and when PASTA is developed, but who (which governments, which companies, etc.) is first in line to benefit from the resulting productivity explosion. 
  • People who take the “caution” frame and people who take the “competition” frame often favor very different, even contradictory actions. Actions that look important to people in one frame often look actively harmful to people in the other.
    • I worry that the “competition” frame will be overrated by default, and discuss why below.
    • To gain more clarity on how to weigh these frames and what actions are most likely to be helpful, we need more progress on open questions about the size of different types of risks from transformative AI.
  • In the meantime, there are some robustly helpful actions that seem likely to improve humanity’s prospects regardless.

The “caution” frame

I’ve argued for a good chance that this century will see a transition to a world where digital people or misaligned AI (or something else very different from today’s humans) are the major force in world events.

The “caution” frame emphasizes that some types of transition seem better than others. Listed in order from worst to best:

Worst: Misaligned AI

I discussed this possibility previously, drawing on a number of other, more thorough discussions. The basic idea is that AI systems could end up with objectives of their own, and could seek to expand throughout space fulfilling these objectives. Humans, and/or all that humans value, could be sidelined (or driven extinct, if we’d otherwise get in the way).

Next-worst: Adversarial Technological Maturity

If we get to the point where there are digital people and/or (non-misaligned) AIs that can copy themselves without limit, and expand throughout space, there might be intense pressure to move – and multiply (via copying) – as fast as possible in order to gain more influence over the world. This might lead to different countries/coalitions furiously trying to outpace each other, and/or to outright military conflict, knowing that a lot could be at stake in a short time.

I would expect this sort of dynamic to risk a lot of the galaxy ending up in a bad state.

One such bad state would be “permanently under the control of a single (digital) person (and/or their copies).” Due to the potential of digital people to create stable civilizations, it seems that a given totalitarian regime could end up permanently entrenched across substantial parts of the galaxy.

People/countries/coalitions who suspect each other of posing this sort of danger – of potentially establishing stable civilizations under their control – might compete and/or attack each other early on to prevent this. This could lead to war with difficult-to-predict outcomes (due to the difficult-to-predict technological advancements that PASTA could bring about).

Second-best: Negotiation and governance

Countries might prevent this sort of Adversarial Technological Maturity dynamic by planning ahead and negotiating with each other. For example, perhaps each country – or each person – could be allowed to create a certain number of digital people (subject to human rights protections and other regulations), limited to a certain region of space.

It seems there is a huge range of different potential specifics here, some much better and more just than others.

Best: Reflection

The world could achieve a high enough level of coordination to delay any irreversible steps (including kicking off an Adversarial Technological Maturity dynamic).

There could then be something like what Toby Ord (in The Precipice) calls the “Long Reflection”: a sustained period in which people could collectively decide upon goals and hopes for the future, ideally representing the most fair available compromise between different perspectives. Advanced technology could imaginably help this go much better than it could today.

There are limitless questions about how such a “reflection” would work, and whether there’s really any hope that it could reach a reasonably good and fair outcome. Details like “what sorts of digital people are created first” could be enormously important. There is currently little discussion of this sort of topic.

Other

There are probably many possible types of transitions I haven’t named here.
