Why I fear “competition” will be overrated relative to “caution”


By default, I expect a lot of people to gravitate toward the “competition” frame rather than the “caution” frame – for reasons that I don’t think are great, such as:

  • I think people naturally get more animated about “helping the good guys beat the bad guys” than about “helping all of us avoid getting a universally bad outcome, for impersonal reasons such as ‘we designed sloppy AI systems’ or ‘we created a dynamic in which haste and aggression are rewarded.’”
  • I expect people will tend to be overconfident about which countries, organizations or people they see as the “good guys.”
  • Embracing the “competition” frame tends to point toward taking actions – such as working to speed up a particular country’s or organization’s AI development – that are lucrative, exciting and naturally easy to feel energy for. Embracing the “caution” frame is much less this way.
  • The biggest concerns that the “caution” frame focuses on — Misaligned AI and Adversarial Technological Maturity — are a bit abstract and hard to wrap one’s head around. In many ways they seem to be the highest-stakes risks, but it’s easier to be viscerally scared of “falling behind countries/organizations/people that scare me” than to be viscerally scared of something like “Getting a bad outcome for the long-run future of the galaxy because we rushed things this century.”
    • I think Misaligned AI is a particularly hard risk for many to take seriously. It sounds wacky and sci-fi-like; people who worry about it tend to be interpreted as picturing something like The Terminator, and it can be hard for their more detailed concerns to be understood.
    • I’m hoping to run more posts in the future that help give an intuitive sense for why I think Misaligned AI is a real risk.

So for the avoidance of doubt, I’ll state that I think the “caution” frame has an awful lot going for it. In particular, Misaligned AI and Adversarial Technological Maturity seem a lot worse than other potential transition types, and both seem like things that have a real chance of making the entire future of our species (and successors) much worse than they could be.

I worry that leaning too heavily on the “competition” frame will lead to downplaying misalignment risk and rushing to deploy unsafe, unpredictable systems – raising the odds of exactly the bad outcomes the “caution” frame warns about.

With that said, I put serious weight on both frames. I remain quite uncertain overall about which frame is more important and helpful (if either is).

Key open questions for “caution” vs. “competition”

People who take the “caution” frame and people who take the “competition” frame often favor very different, even contradictory actions. Actions that look important to people in one frame often look actively harmful to people in the other.

For example, people in the “competition” frame often favor moving forward as fast as possible on developing more powerful AI systems; for people in the “caution” frame, haste is one of the main things to avoid. People in the “competition” frame often favor adversarial foreign relations, while people in the “caution” frame often want foreign relations to be more cooperative.

(That said, this dichotomy is a simplification. Many people – including myself – resonate with both frames. And either frame could imply actions normally associated with the other; for example, you might take the “caution” frame but feel that haste is needed now in order to establish one country with a clear enough lead in AI that it can then take its time, prioritize avoiding misaligned AI, etc.)

I wish I could confidently tell you how much weight to put on each frame, and what actions are most likely to be helpful. But I can’t. I think we would have more clarity if we had better answers to some key open questions:
