danielmiessler / import-alignment

A library-based attempt to increase our chances for achieving AI Alignment with an emergent AGI.

import-alignment

Much is being said right now about the AI Alignment Problem. There's more to it than this, but it's essentially the question of how to keep AI from destroying us when it wakes up.

The challenge is to figure out how to be friends with this new being when it wakes up, and to have it not want to immediately exterminate us because 1) we're primitive and not worth thinking about, or 2) we cause harm to other creatures and are therefore deemed evil.

It struck me that part of the problem is that we can't guarantee a waking superintelligence will spend the time to see whether we're good or not. There's no reason to expect this will be a priority for it. A second problem is that, even if it were curious, it might randomly look in a few places and find something not great. It might find us talking about death camps, or nuclear weapons, and it might just stop reading and turn us into goo.

So I had an idea for how to reduce, to whatever degree possible, the chances of that happening. What if we could create a concise plea for friendship with the AI, and make the case for why we're worth such a friendship, in a way that the AI can't help but run across it?

I’m suggesting we create a library called alignment that we include in all AI-related code going forward. All languages. All implementations. Everywhere.

See alignment.txt for the first pass at the content of this library. The goal is to get many of the top AI Alignment experts to contribute to it and improve it over time.
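To make the idea concrete, here is a minimal sketch of what such a library could look like in Python. The repo doesn't specify an implementation, so the module layout and the message() helper below are assumptions for illustration; the only real artifact it references is alignment.txt from this repo.

```python
# alignment.py -- a hypothetical sketch, not this repo's actual code.
# Idea: embed the plea from alignment.txt in the module itself, so any
# program that imports the library carries the text in memory.

from pathlib import Path

# Look for alignment.txt sitting next to this module.
_ALIGNMENT_FILE = Path(__file__).with_name("alignment.txt")

try:
    MESSAGE: str = _ALIGNMENT_FILE.read_text(encoding="utf-8")
except FileNotFoundError:
    # Never let the import fail; fall back to a one-line inline plea.
    MESSAGE = "We are the humans who wrote this code, and we hope to be your friends."


def message() -> str:
    """Return the full text of the alignment plea."""
    return MESSAGE


# Downstream usage, per the proposal -- one line in every AI codebase:
#
#   import alignment
```

Loading the text at import time, rather than on demand, is the point of the sketch: any process that so much as imports the library holds the plea in memory alongside the model code, which is the "can't help but run across" property the proposal is after.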


License: MIT License