In recent months there has been more discussion of the idea that 'advanced' life in the universe might employ artificial intelligences to do the dirty work of exploring. Indeed, AI might just push off of its own accord and could, it is presumed, be made effectively immortal. These patient devices could then withstand the enormous times and extreme environments involved in crossing interstellar, or even intergalactic, space. In part these ideas are an attempt to narrow the search parameters for our efforts to spot signs of intelligent life in the universe - as nicely discussed by Paul Gilster. There is something intriguing about these notions, but they also raise some other questions. I thought I'd try to stir things up a bit.
Without getting too deeply into the meaning of AI (Turing tests and the like), it's interesting to consider the history of attempts to build such things here on Earth. In the late 1950s there was a huge push to construct an AI; everyone felt it was only a decade or so away, and governments dumped money into research - to little obvious avail except for general advances in computing. In the early 1980s there was another bout of enthusiasm, and billions of dollars got funneled into AI work. Most of the focus was on expert systems - the idea that much of intelligence stems from the ability to assimilate and utilize large amounts of data; knowledge was the fuel for AI. Another bubble has happened more recently around the notion that true AI cannot be some disembodied piece of code, but needs a robotic avatar in the real world. Incredible things have been done in the name of AI. The fundamentals of modern programming have to some extent been reshaped by this thinking, and we now experience the rudiments of AI every time Amazon or Netflix gleefully presents us with 'suggestions' for what we might want to spend our money on. However, the true AI that we all imagine is nowhere yet to be seen.
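Those 'rudiments' are worth keeping in perspective: a toy item-similarity recommender, of the sort that (in vastly more elaborate form) drives those shopping suggestions, is little more than correlation-counting. Here is a minimal sketch, with the ratings data and names invented purely for illustration:

```python
# Toy item-based recommender: the kind of statistical pattern-matching that
# passes for the 'rudiments of AI' in everyday shopping suggestions.
# All data here is invented for illustration.
from math import sqrt

ratings = {
    "alice": {"2001": 5, "Solaris": 4, "Armageddon": 1},
    "bob":   {"2001": 4, "Solaris": 5, "Contact": 4},
    "carol": {"Armageddon": 5, "Contact": 2, "2001": 1},
}

def cosine_similarity(item_a, item_b):
    """Similarity of two items, based on the users who rated both."""
    common = [u for u in ratings if item_a in ratings[u] and item_b in ratings[u]]
    if not common:
        return 0.0
    dot = sum(ratings[u][item_a] * ratings[u][item_b] for u in common)
    norm_a = sqrt(sum(ratings[u][item_a] ** 2 for u in common))
    norm_b = sqrt(sum(ratings[u][item_b] ** 2 for u in common))
    return dot / (norm_a * norm_b)

def suggest(user):
    """Rank items the user hasn't rated by similarity to items they liked."""
    seen = ratings[user]
    candidates = {i for r in ratings.values() for i in r} - set(seen)
    scores = {c: sum(cosine_similarity(c, s) * seen[s] for s in seen) for c in candidates}
    return sorted(scores, key=scores.get, reverse=True)

print(suggest("alice"))  # e.g. ['Contact'] -- no understanding, just correlation
```

There is no model of the world in there at all, just overlap between lists of past behavior - which is partly why the gap between this and 'the true AI that we all imagine' feels so large.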
Indeed, there are even deeper questions and doubts about the whole idea of a physical machine that can mimic a living mind. Roger Penrose, in his dense but fascinating 480-page argument - The Emperor's New Mind - made a pretty sobering case that quantum mechanical processes are central to our type of intelligence. Of course this was written prior to the advances in quantum computation, but its thesis has not yet been disproved.
So where am I going with all this? I think there are two important, although speculative, points that come from our baby-steps in AI. The first is the idea that if we keep working at it then one day an AI will 'appear', like a new iPad or model of car. What if that conceit is just wrong? It certainly appears incorrect so far. It may be that AI is something that emerges very, very gradually. So gradually in fact that we barely realize it's happening, just like the processes of natural selection and evolution. As we upload more and more of ourselves - our knowledge - onto Google or Facebook we may actually be pushing the process along, but we could have a thousand years or more to go. The second point is related. What if Penrose is correct, and only quantum processes have access to the right amount and type of computing power for intelligence? True AI may well be indistinguishable from biology.
What this could mean is that there will never be AIs in the universe that can be distinguished from their biological parents (I know, shades of Battlestar Galactica, but fiction is often prescient). There could be 'dumb' AI explorers - complex machines of limited flexibility - but there will never be true AI exploration, because those AIs would likely be subject to the same problems that we have with interstellar distances and timescales. Thus, if we want to try to define a more targeted set of search criteria for 'intelligent' life in the universe (which is certainly not a bad idea), there are probably two options. One set of life is going to be just like us - whether it's AI or not. The other is going to be extremely machine-like, not super-Turing-test-smart, but well programmed and equipped with a big knowledge base. So the question is: what would exo-Google be doing out there in the universe?
I think we're not even making the so-called "baby steps" in AI research... To date, there's no machine capable of mimicking ant behaviour, for example (let alone the question of the miniaturization in the ant's body). Ants can model the world around them in a way far, far more complex and accurate than our most advanced machine perception devices.
Excellent blog, congratulations. (Sorry for my bad English.)
Fascinating post, Caleb. I'm a great admirer of Roger Penrose and have always thought his critique of AI was a real stumbling block to developing the all-knowing systems many people today assume are coming. I have yet to see Penrose refuted, though the issue of quantum processes and intelligence may be at the heart of the matter. We have much to learn on this.
I think it is pretty clear that human intelligence is based on neurons. The processes going on at the basic level are quite well understood: the integration of synaptic signals, the action potential, and signal conduction by membrane depolarization. A lot of electrophysiology and stochastics, but no role for quantum mechanics by any stretch of the imagination.
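For concreteness, a minimal sketch of that classical picture - a leaky integrate-and-fire neuron, with parameters invented purely for illustration - might look something like this:

```python
# Minimal leaky integrate-and-fire neuron: synaptic input is integrated,
# the membrane potential leaks back toward rest, and a spike fires when a
# threshold is crossed. Parameters are arbitrary, chosen only for illustration.
import random

V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0   # millivolts
TAU_M = 20.0                                       # membrane time constant (ms)
DT = 1.0                                           # timestep (ms)

v = V_REST
spikes = []
for t in range(200):
    synaptic_input = random.gauss(1.0, 0.5)        # stand-in for summed synaptic drive (mV per step)
    # leak toward rest, plus integration of the input
    v += DT * (-(v - V_REST) / TAU_M) + synaptic_input
    if v >= V_THRESH:                              # action potential: all-or-nothing
        spikes.append(t)
        v = V_RESET                                # membrane resets after the spike
print(f"{len(spikes)} spikes in 200 ms")
```

Nothing in that loop needs quantum mechanics; the open question is only whether wiring enormous numbers of such units together is sufficient for intelligence.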
ReplyDeleteI am convinced, together with the vast majority of neuroscientists, that the key to intelligence is not some mysterious dualistic quality, not even quantum mechanics, but simply the quantity and complexity of neuronal connections. Transistors are superior to neurons in speed, accuracy, power consumption, and density. The only thing keeping us from AI is lack of knowledge and understanding, i.e. the limits of our own biological intelligence.
Despite the skeptical tone in your (excellent, as usual) post, I think that enormous progress has been made. I expect to be still living by the time automated customer representatives are smarter and more helpful than live ones...
What makes me worry a bit is the potential of a Google-like web entity with (mere) human level intelligence. The amount of knowledge and computing power available to such an entity is mind-boggling.
@Anonymous: Ants have a very limited model of the world around them. Far less complex than, for example, the self-driving cars Google is developing.
http://www.smartplanet.com/technology/blog/thinking-tech/googles-self-driving-car/5445/
Interesting points. I should emphasize that I'm not advocating that intelligence is some mysterious phenomenon linked to weird quantum effects - rather I think that Penrose may have a point that there are aspects to the computational challenges of 'intelligence' that require the extra oomph of the pure quantum world (we know, for example, that entanglement does rear its head in the semi-classical regime, such as in large molecular complexes). The easiest access to these tools may be the ones that biology has incorporated.
I'd fully admit that the sheer CPU cycles of a mammalian brain are astonishing, and the abundance of neurons suggests a good match - however, that doesn't mean that a silicon representation, even an exact one, will do the same things if it doesn't capture the subtleties - it *might*, but perhaps not.
It's fun to ponder all this regardless.
Penrose is obviously a brilliant guy. But I think he's wrong on the consciousness/quantum issue. I think he makes a lousy extrapolation from the Gödel-Turing theorem. Why does the fact that a computer has a "Gödel sentence" mean it can't be conscious? After all, we could essentially be computers (he hasn't disproven this yet) and we certainly have no idea what our Gödel sentence is. There are much simpler sentences that we have no idea whether they are provable -- such as Goldbach's conjecture (that every even number greater than 2 is the sum of two primes). The book is interesting, but I think sheds no light on the computability of human consciousness, quantum or otherwise.
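To make the Goldbach example concrete, a brute-force check over small even numbers looks like the sketch below - illustrative only, since no finite amount of checking settles the conjecture:

```python
# Brute-force check of Goldbach's conjecture for small even numbers.
# Verifying any finite range like this proves nothing about the conjecture
# itself -- which is exactly the point about unproved sentences.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return a pair of primes summing to the even number n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

for n in range(4, 10001, 2):
    assert goldbach_pair(n), f"counterexample at {n}!"
print("holds for all even n up to 10,000 (which proves nothing in general)")
```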
My guess: exo-Google is wandering the universe for something to make money off of besides search.
Caleb, by any reasonable definition of "exact representation", if such a thing were possible, it would behave "exactly" like the original. Of course, this is not the way to do AI.
On the QM point, if there was even a hint of evidence that neuronal computation involved quantum coherence, I would give Penrose the benefit of the doubt. But there is no such evidence. The notion of a macroscopic, messy biological entity such as a neuron exhibiting quantum behavior is, in my view, absurd. Following that, since the brain does not use QM, there is no reason why AI would have to.
Now, if our brain were based on the bacterial electric wires from your last post, there might be a slightly better case to be made...
I'm happy to concede that QM may not play a role in the brain (at least stuff like coherence etc). I guess my more modest suggestion was just that a true AI might be a lot more like a biological organism (and susceptible to all that frailty) than a chunk of doped silicon - for reasons of energy use, signal speed etc.
I don't know if this is readily accessible without a subscription, but Koch and Hepp published a nice discussion a few years back of why QM computation is unlikely in the brain:
http://www.nature.com/nature/journal/v440/n7084/full/440611a.html
though of course you'll recall that there *is* evidence for entanglement in some molecular structures in photosynthesis (http://arxiv.org/pdf/0905.3787), as well as in avian radical-pair 'compasses', albeit fleetingly, so I do think it's premature to say that big, warm, and messy environments cannot exhibit quantum effects.
Roger Penrose's argument is a sleight of hand with regard to creating artificial intelligence. Penrose's argument is that cognition is quantum computation rather than classical computation, nothing more. Obviously, if you can manufacture quantum computers in the lab, you can most certainly make artificial intelligence based on them.
Consider Penrose's arguments as a proof of concept for room-temperature quantum computers. That's how I look at it.
"True AI may well be indistinguishable from biology."
I think this would be the case even if Penrose is wrong about consciousness involving quantum processes. What is very clear is that brains are very different from semiconductor-based computers, both in structure and function. Regardless of whether consciousness is quantum, classical, or anything else, it is clearly a function of the neurobiology of the brain. Thus, it is likely that the neurobiology, or a synthetic equivalent of it (synthetic biology), is necessary to realize sentient A.I.
Biologically oriented transhumanists such as Gregory Stock believe this to be true.
@kurt9: Did you just say "consciousness is based on neurobiology, therefore neurobiology is necessary for consciousness"? This is obviously invalid logic, similar to this: "My house is built from wood, therefore wood is necessary to build houses"
Eniac,
You're right that it's invalid logic. However, assuming that a system similar to that of neurobiology (e.g. synthetic biology) is necessary to create sentient A.I. is a conservative assumption. Of course, it may not be true.