The Last Question
What is the fastest way to reliably align a powerful AGI around the safe performance of some limited task that is potent enough to save the world from unaligned AGI?