Is computer artificial intelligence a dead field?
The main problem with traditional artificial intelligence was that it separated mind from body and imagined a disembodied, insensate brain as capable of thought. Most current theories treat the brain as the result and organization of sensory input, and the body as, in effect, the brain’s data-gathering extension. In that sense, the brain-in-a-vat style of AI has nowhere to go. Artificial intelligence now has to be concerned with artificial life.
As others have said, the field of AI is not dead – there is a tremendous amount of research and funding going on under that heading (google “machine learning”, for instance). The goals of the field are not to produce something like a human brain, though. The goals are typically to solve certain hard, specific problems that involve complex decision making: searching “intelligently” through large decision spaces, massively distributing problems over many autonomous agents, figuring out how to have autonomous agents act with “flexibility”, improving techniques for answering questions posed by humans given a large body of knowledge, etc. Almost no one is doing what the public thought of as AI back in the 70s and 80s. Even the methodologies popular then (expert systems, large-scale symbolic reasoning systems) are not popular now. The only place I can think of that still works in that spirit is the MIT Media Lab, where they do things such as try to make a robot face act “human” or convey emotion, that kind of thing. From m
Don’t assume Hubert Dreyfus is anywhere near right in his predictions. He has somewhat softened his position (in 1970 he used to say that computers would never be able to play chess well), and it’s unclear to me whether his position is now that AI is bankrupt in principle, or that intelligent machines are genuinely possible but would just be really hard to build and we’re nowhere close to the right solution. Dreyfus is kind of a dirty word in the AI community. Anyway, as most people have pointed out, there are two strands of AI research. One is made up of engineers who are just interested in solving practical problems, like finding the shortest distance between two points on a map. They’ll borrow features from human reasoning willy-nilly and implement them in their programs if they think it’ll help them out, but they don’t especially care if their programs diverge radically from the way human intelligence actually works. A lot of people working in this manner (mostly computer s
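To make the “practical problems” strand concrete: the shortest-distance-on-a-map example above is classically solved with Dijkstra’s algorithm. Here’s a minimal sketch in Python; the road network and node names are invented for illustration, not taken from any real system.

```python
# Minimal Dijkstra's algorithm: cheapest route between two nodes in a
# weighted graph, the textbook version of "shortest distance on a map".
import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) of the cheapest route from start to goal."""
    queue = [(0, start, [start])]  # (cost so far, node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []  # goal unreachable

# Hypothetical road network: node -> [(neighbor, distance), ...]
roads = {
    "A": [("B", 5), ("C", 2)],
    "C": [("B", 1), ("D", 7)],
    "B": [("D", 3)],
}
print(dijkstra(roads, "A", "D"))  # -> (6, ['A', 'C', 'B', 'D'])
```

Note how thoroughly un-humanlike this is: a priority queue over partial paths, not anything resembling how a person reads a map – which is exactly the engineers’ point.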
I study AI (among other things). AI has accumulated a massive amount of dead weight over the years – research that may have looked hopeful and exciting but went nowhere and now only serves to distract. A lot of people who should have known better made very assured statements and predictions which turned out to be meaningless. It is also a huge field, encompassing vastly different disciplines, approaches, and goals. AI is by no means dead. Like painquale said, there are two aspects to modern AI research. One is to solve practical “weak AI” problems with any of the approaches that AI has been interested in over its history. That field is huge, diverse, and producing small but useful results all the time – in fact, you probably don’t realize just how much weak AI is behind the technology you use every day. Most of the approaches being used are simply statistical learning methods. The other aspect is the pursuit of strong AI. This field is also huge, but most of the interesting research is fo
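For a flavor of what “statistical learning methods” means here, one of the simplest is k-nearest-neighbors: classify a new point by majority vote among its k closest labeled examples. A toy sketch in Python follows; the feature values and labels are made up purely for illustration.

```python
# Toy k-nearest-neighbors classifier: predict the label of a query point
# from the majority label of its k closest training examples.
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """train: list of ((x, y), label); query: (x, y). Returns predicted label."""
    # Sort training examples by Euclidean distance to the query point.
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))
    # Majority vote among the k closest labels.
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Invented two-feature "spam score" data for the sketch.
examples = [
    ((0.9, 0.8), "spam"), ((0.8, 0.9), "spam"), ((0.7, 0.9), "spam"),
    ((0.1, 0.2), "ham"),  ((0.2, 0.1), "ham"),  ((0.15, 0.3), "ham"),
]
print(knn_classify(examples, (0.85, 0.85)))  # -> 'spam'
print(knn_classify(examples, (0.10, 0.15)))  # -> 'ham'
```

Nothing in there tries to model how a mind works; it just generalizes from labeled data, which is why so much of this “weak AI” can hide unnoticed inside everyday technology.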