Dave Chalmers Singularity Lecture

A few weeks ago I went to Oxford to say hello to my friend Matt, and we went to a lecture on the singularity by David Chalmers. He covered many aspects, but one idea he talked about was that, for safety reasons, a superhuman AI should be developed in virtual reality. He said that the most important thing was that information about our world shouldn't be allowed to leak into the virtual one; information leaking out was less dangerous. A bit like the one-way mirrors they have in police interview rooms: the observers can see in, but the person being observed can't see out. The argument was that if the AI learned of our existence, it could use whatever channel we observed it through to manipulate people on the outside into freeing it.

An interesting idea, but I'm sceptical that we can really develop AIs safely. Perhaps the best we can do is to try to instil a moral principle that the strong shouldn't harm the weak. Since today's strong will be tomorrow's weak as AIs gain in sophistication, it should be in the strong's interest to uphold this principle. The problem is that it's a principle the weak can never enforce, and so we'll always have to rely on the strong being responsible.