narrow the search parameters for our efforts to spot signs of intelligent life in the universe - as nicely discussed by Paul Gilster. There is something intriguing about these notions, but they also raise some other questions. I thought I'd try to stir things up a bit.
Without getting too deeply into the meaning of AI (Turing tests and the like), it's interesting to consider the history of attempts to build such things here on Earth. In the late 1950s there was a huge push to construct an AI; everyone felt it was only a decade or so away, and governments poured money into research - to little obvious avail except for general advances in computing. In the early 1980s there was another bout of enthusiasm, and billions of dollars were funneled into AI work. Most of the focus was on expert systems - the idea that much of intelligence stems from the ability to assimilate and utilize large amounts of data; knowledge was the fuel for AI. Another bubble has happened more recently, built on the notion that true AI cannot be some disembodied piece of code: it needs a robotic avatar in the real world. Incredible things have been done in the name of AI. The fundamentals of modern programming have to some extent been reshaped by this thinking, and we now experience the rudiments of AI every time Amazon or Netflix gleefully presents us with 'suggestions' for what we might want to spend our money on. However, the true AI that we all imagine is nowhere yet to be seen.
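To make concrete just how mechanical those 'suggestions' really are, here is a toy sketch of item-based collaborative filtering, the kind of technique behind such recommenders. Everything in it - the users, the items, the ratings - is invented purely for illustration; real systems operate on vastly larger data with more sophisticated models.

```python
# Toy sketch of recommender-style 'suggestion' logic: item-based
# collaborative filtering over a tiny, invented ratings table.
# All names and numbers are hypothetical, for illustration only.
from math import sqrt

# rows: users, columns: items (rating 0 means "not yet rated")
ratings = {
    "alice": {"book_a": 5, "book_b": 3, "book_c": 0},
    "bob":   {"book_a": 4, "book_b": 0, "book_c": 2},
    "carol": {"book_a": 1, "book_b": 5, "book_c": 4},
}

def cosine(u, v):
    """Cosine similarity between two equal-length rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user):
    """Suggest the unrated item favored by users most similar to `user`."""
    target = ratings[user]
    scores = {}
    for item, r in target.items():
        if r:  # already rated, nothing to suggest
            continue
        # weight every other user's rating by their similarity to `user`
        num = den = 0.0
        for other, theirs in ratings.items():
            if other == user or not theirs[item]:
                continue
            sim = cosine(list(target.values()), list(theirs.values()))
            num += sim * theirs[item]
            den += sim
        scores[item] = num / den if den else 0.0
    return max(scores, key=scores.get) if scores else None

print(recommend("alice"))  # → book_c
```

No understanding, no intent - just vector arithmetic over accumulated data, which is exactly the point the paragraph above is making.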
Indeed, there are even deeper questions and doubts about the whole idea of a physical machine that can mimic a living mind. Roger Penrose, in his dense but fascinating 480-page argument The Emperor's New Mind, made a pretty sobering case that quantum mechanical processes are central to our type of intelligence. Of course this was written prior to the gains in quantum computation, but its thesis has not yet been disproved.
So where am I going with all this? I think there are two important, although speculative, points that come from our baby steps in AI. The first concerns the idea that if we keep working at it, then one day an AI will 'appear', like a new iPad or a new model of car. What if that conceit is just wrong? It certainly appears incorrect so far. It may be that AI is something that emerges very, very gradually - so gradually, in fact, that we barely realize it's happening, just like the processes of natural selection and evolution. As we upload more and more of ourselves - our knowledge - onto Google or Facebook, we may actually be pushing the process along, but we could have a thousand years or more to go. The second point is related. What if Penrose is correct, and only quantum processes have access to the right amount and type of computing power for intelligence? True AI may well be indistinguishable from biology.
What this could mean is that there will never be AIs in the universe that can be distinguished from their biological parents (I know, shades of Battlestar Galactica, but fiction is often prescient). There could be 'dumb' AI explorers - complex machines of limited flexibility - but there will never be true AI exploration, because those AIs would likely be subject to the same problems that we have with interstellar distances and timescales. Thus, if we want to define a more targeted set of search criteria for 'intelligent' life in the universe (which is certainly not a bad idea), there are probably two options. One kind of life is going to be just like us - whether it's AI or not. The other is going to be extremely machine-like: not super-Turing-test-smart, but well programmed and equipped with a big knowledge base. So the question is, what would exo-Google be doing out there in the universe?