
AI
There are machines that perform better than humans -- airplanes and cars are stronger and faster, computers are better and faster at calculating.
But there is no machine that can think better than humans (or most other animals) and there might never be.
The confusion about AI, from the very beginning, has resulted from the idea that humans can make machines that are superior to ourselves and other naturally-occurring machines (animals, in other words). Stronger and faster is possible, more intelligent is not.
A chess program can sometimes beat humans because it's better at brute force calculating and remembering, not because it's more intelligent.
The Real PC
Thursday, October 9, 2003
Would you care to elaborate just which physical law supports your "Stronger and faster is possible, more intelligent is not" hypothesis?
Just me (Sir to you)
Thursday, October 9, 2003
Please define what you mean by 'think' and 'intelligent'. Any discussion is worthless without a common platform.
NC
Thursday, October 9, 2003
Of course, the critical question: will it help me get rid of spam?
Philo
Thursday, October 9, 2003
[Please define what you mean by 'think' and 'intelligent'.]
A machine can use logic. You can program your coffee machine, for example, with the rule "If it's 8 am, make coffee." You can also teach it the rule "If it's 8 am, if it's not a weekend or holiday, make coffee." You can program a machine with more and more rules, and refinements of the rules it already has.
However, a naturally-occurring machine (animal) can, within limits, program itself. There is no true self-programming (learning) in artificial machines. Neural networks supposedly learn, but it's very limited.
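To make that concrete, here is roughly what such a rule stack looks like -- a toy sketch in Python, with the holiday table and all names made up:

from datetime import date

HOLIDAYS = {date(2003, 12, 25)}  # hypothetical holiday table, written by a human

def is_weekend_or_holiday(today):
    return today.weekday() >= 5 or today in HOLIDAYS

def should_make_coffee(today, hour):
    # Rule: if it's 8 am, make coffee.
    # Refinement: ...but not on a weekend or holiday.
    return hour == 8 and not is_weekend_or_holiday(today)

Note that every refinement is another line a human typed in; nothing in the machine adds rules to itself.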
The Real PC
Thursday, October 9, 2003
In other words, the human machine is able to get outside of itself, somehow. This probably has something to do with Godel's theorem.
The Real PC
Thursday, October 9, 2003
It could be that man-made computers are not powerful enough to emulate a human brain, which does not preclude the possibility that one day they will be able to. Of course, then a computer can make all the same mistakes we do.
John Ridout
Thursday, October 9, 2003
> Stronger and faster is possible, more intelligent is not.
Yes, and no one will ever run a 4 minute mile, and if God intended man to fly he'd have given us wings.
I'm sorry, but there is no such thing as "never". Just because it doesn't seem theoretically possible doesn't mean there isn't a way. Research into how the brain works will improve how AI can be built. It isn't easy, and using traditional AI techniques it is theoretically impossible, but that doesn't mean there are not new techniques to come.
I don't believe in aliens, but I run Seti@Home because I think that there is a possibility of other life beyond our own. You shouldn't limit yourself by what YOU think isn't possible given the current state of technology, physics or techniques.
Tim Sullivan
Thursday, October 9, 2003
Real PC - If you really want to consider this topic in a meaningful way, I suggest you go do some heavy reading in philosophy of science and philosophy of mind (you might start with Godel, Escher, Bach on mind and computation, and then something by Daniel Dennett (sp?) on free will).
Exception guy
Thursday, October 9, 2003
[Please define what you mean by 'think' and 'intelligent'. Any discussion is worthless without a common platform.]
Ahh... This is actually one of the central problems with AI -- *nobody* has a definition. It certainly hampers scientific study when your problem is undefined!
We may not be able to define it, but we *can* study the characteristics of intelligence (in other words, glean insight into the cause by studying the symptoms). There is a book called "Fluid Concepts and Creative Analogies" by Douglas Hofstadter (author of "Goedel, Escher, Bach") in which he discusses research he has done in this area. As the title suggests, Hofstadter feels that one of the central characteristics of human intelligence is the ability to use analogies -- that is, to apply the concepts we know about one thing to something else.
Another insight, pointed out by Marvin Minsky in his excellent book "Society of Mind", is that intelligence means nothing if there is no goal to apply it to. I think we have all noticed this in our work -- it is a lot easier to code when you have strict design goals and known requirements than when you aren't sure of 'where you're headed', so to speak.
Contrast this to Ray Kurzweil ( http://www.kurzweilai.net/ ), who foolishly believes that we will keep creating smarter and smarter machines, until one day they are smart enough to be able to create super-duper smart machines of their own. That point in time will be the 'singularity' -- the next step in evolution. Well, the problem with that theory is that Kurzweil (and many others) is looking at intelligence as some number on a one-dimensional scale. We keep incrementally nudging up the scale until *BAM* -- intelligence squared!
Well, as we all know from research on IQ tests (and the SATs, for that matter), there is no one metric that defines intelligence. Intelligence is a balance of many aspects of the mind -- thinking, empathy, relationship with the body, etc.
I for one believe that we will create machines that will be able to solve a variety of problems in a dynamic fashion. But will it be intelligence? I don't know (it certainly won't be 'consciousness'). And of course this brings us back full-circle, because implicit in that statement is my own definition of 'intelligence', which I started out by saying was not defined!
So yeah, the study of AI has as much to do with the definition of the problem ("What is intelligence?") as it does with solving the problem itself (building AI).
Jordan Lev
Thursday, October 9, 2003
As this discussion is "out-there", I'll continue the drift: I don't know that you are "conscious" or "intelligent" except by your actions that I perceive. I don't know how we can say that a machine is not conscious, since I don't know how to prove this in a person. Perhaps intelligence and consciousness are on a sliding scale, with a calculator near the bottom, a good AI program up the scale somewhat, and a person near the top. They all have outputs that I can perceive, and vary those outputs based on inputs and internal workings. Using that metric, machines may get more and more conscious and intelligent over time (based on their outputs that I'm perceiving). A machine may never act like a person because its internal workings (including the senses, hormones, etc.) will be different, but it may be "more" intelligent, based on some scale.
Barry Sperling
Thursday, October 9, 2003
>But there is no machine that can think better than humans (or most other animals) and there might never be.
Be aware that the goalposts for A.I. keep shifting. 50 years ago, a machine that played chess would have been considered A.I. -- not anymore. 20 years ago, speech recognition would have been considered A.I. -- not anymore. There are plenty more examples. The point is that we are gradually creating more and more AI machines -- it's just that as we create them, we redefine what intelligence is and what AI is.
Knowledge Maker
Thursday, October 9, 2003
When I say AI isn't possible I mean within the current scientific framework. Science and technology make terrific progress with some kinds of problems and meet eternal roadblocks with others. I think there is no limit to the progress science and technology can make, even within the materialist framework. However, the progress will only be in certain kinds of domains. If/when science breaks free of certain limitations, then some of the roadblocks may be removed.
I have read books by Hofstadter, Minsky, etc., and they have some great ideas. However, their thinking is confined by certain assumptions that are currently blocking all progress in certain areas.
The Real PC
Thursday, October 9, 2003
For example, understanding Mind in terms of quantum physics.
The Real PC
Thursday, October 9, 2003
well you hit the nail on the head by mentioning materialism. if mental processes arise from physical processes, there is no _logical_ reason that a robot can't be built that is identical in intelligence to a human. obviously there is a lot of work to be done to discover/replicate those processes, but it is only a matter of time and effort.
rz
Thursday, October 9, 2003
I'd have to guess that there's a significant quantum element to 'intelligence', 'consciousness' or 'mind'. Maybe quantum computers will have a better chance of solving AI problems not yet solved.
A little off-topic...but on a quantum groove...if there are an infinite number of universes where every possibility is explored...does that mean there's a universe in which one never dies? And is that the one you might be living in?
Knowledge Maker
Thursday, October 9, 2003
I think it was Donald Michie who said that there are two things wrong with AI.
The words Artificial and Intelligence.
And if he didn't, I just did.
Simon Lucy
Thursday, October 9, 2003
actually all of biology needs a serious kick in the pants when it comes to physics. most molecular biologists are basing their models on a view of atomic structure that dates back to democritus. it isn't apparent that this is necessary to understanding how the mind works, but it couldn't hurt. it is hard to get anyone to work on this right now, because all of the money in biology comes from pharmaceutical corporations, who seem to be doing just fine with the current way of thinking. AI/Cognitive science has been pretty dead (from a funding point of view) for the past 15 years, or more.
rz
Thursday, October 9, 2003
[if mental processes arise from physical processes..]
"Physical" has been a meaningless word for a long time, because of advances in physics. What is "physcial" about a field, for example?
There are physicists studying how the mind works in terms of quantum physics. However, most AI researchers and cognitive scientists probably know nothing about this.
The Real PC
Thursday, October 9, 2003
I'm of the opinion that the only way to create a truly intelligent being is by accident.
But that's more a personal skewed notion of a soul than anything else.
Flamebait Sr.
Thursday, October 9, 2003
The problem with computers thinking like humans is that computers are completely logical, and humans are emotional and are often wrong.
When a machine adds 1 and 1 and always gets 2, it can never think like a human.
If the machine adds 1 and 1, knows the answer is 2, but uses some sort of randomness to determine that this time it should report 3, the machine is still not thinking like a human.
Elephant
Thursday, October 9, 2003
I thought intelligence has always been defined as "that which I know and you do not know".
Devil's Advocate
Thursday, October 9, 2003
[The problem with computers thinking like humans is that computers are completely logical, and humans are emotional and are often wrong.]
No, that is definitely not the problem. Emotions are logical and are often correct. Computers lack emotions simply because they aren't programmed to have them.
A computer's problem, as I said, is that it is a logical system that has no way to get outside itself, except through its programmers.
The Real PC
Thursday, October 9, 2003
"The Real PC": When you say "This probably has something to do with Goedel's theorem", you probably want to investigate an argument due to the philosopher J R Lucas. It goes roughly like this: "A computer is basically just a big formal system, with the things it can know being theorems of the system. If it's intelligent enough to be interesting, then it can do mathematics, so the system is suitable for applying Goedel's theorem to. Therefore there's something it can't know -- *and we can see that*. Therefore we're cleverer than it is."
Fortunately or unfortunately, depending on how you look at it, this plausible-ish argument is full of holes. This was first pointed out by another philosopher, Hilary Putnam, *before* Lucas ever published his argument. More details are, e.g., at http://homepage.ntlworld.com/g.mccaughan/g/remarks/lucas.html
(disclaimer: I wrote that, so my opinion that it's a nice clear demolition of the argument may be biased).
Gareth McCaughan
Thursday, October 9, 2003
"Emotions are logical and are often correct."
Logical? Is it logical that when I stub my toe on the couch I run around swearing? Is it logical that when a parent dies you cry? Do the emotional reactions to pain and grief improve a situation? Sure, they make sense to us as humans. Logically, they have no basis, as they do not lead to a solution -- there is no solution.
Look at the movie WarGames. In the trivial example, the computer plays itself in tic-tac-toe and arrives at the logical conclusion that you can't win, so why bother playing, and it shuts itself off. The movie likens it to some sort of AI, which is completely off base. As a human, I know you can't win tic-tac-toe, yet I still play, knowing that at some point my human opponent will accidentally make a mistake.
In a game where all outcomes can be perceived like tic-tac-toe I know not to play the computer, as it will never make a mistake. I suppose that's why you don't see too many tic-tac-toe games on the shelves.
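For what it's worth, the reason the computer never makes a mistake there is that tic-tac-toe is small enough to search completely. A toy minimax in Python (illustrative only) confirms that perfect play from an empty board is a draw:

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    # board is a list of 9 cells: 'X', 'O', or ' '
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Best achievable score for the player to move: +1 win, 0 draw, -1 loss.
    w = winner(board)
    if w:
        return 1 if w == player else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full: a draw
    other = 'O' if player == 'X' else 'X'
    best = -2
    for m in moves:
        board[m] = player
        best = max(best, -minimax(board, other))  # the opponent's gain is our loss
        board[m] = ' '
    return best

print(minimax(list(' ' * 9), 'X'))  # prints 0: neither side can force a win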
The fundamental design of computer hardware relies on binary data, 1s and 0s, right and wrong. It doesn't see shades of grey, and it doesn't see the forest for the trees.
When you say that the computer can't get outside itself, I believe you are correct, but it's because of a fundamental hardware design flaw, not a flaw in a software process.
Elephant
Thursday, October 9, 2003
"The problem with computers thinking like humans is that computers are completely logical, and humans are emotional and are often wrong"
Just because computers are programmed using logic doesn't mean they can't be wrong. Look at spam filters -- sometimes they're right, sometimes they're wrong, even if they are 100% bug-free. In general, a person can identify spam better than a computer. This is essentially because humans have "real-world" knowledge, whereas spam filters generally do not. The fact that software is "completely logical" is not what is holding back the field of AI.
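To make that concrete: a statistical filter is perfectly logical, deterministic code, and it still guesses wrong, because word counts are all it has. A toy word-scoring sketch in Python, in the spirit of Paul Graham's "A Plan for Spam" (all names mine):

import math
from collections import Counter

def train(spam_docs, ham_docs):
    # Count how often each word appears in known spam and known non-spam.
    spam, ham = Counter(), Counter()
    for d in spam_docs:
        spam.update(d.lower().split())
    for d in ham_docs:
        ham.update(d.lower().split())
    return spam, ham

def spam_score(msg, spam, ham):
    # Sum of log-likelihood ratios; positive means "looks like spam".
    n_spam, n_ham = sum(spam.values()), sum(ham.values())
    score = 0.0
    for w in msg.lower().split():
        # Add-one smoothing so unseen words don't zero things out.
        score += math.log((spam[w] + 1) / (n_spam + 2))
        score -= math.log((ham[w] + 1) / (n_ham + 2))
    return score

A spam message built from words the filter has only ever seen in legitimate mail scores as ham -- no bug required.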
Ken
Thursday, October 9, 2003
We are programmed with emotions because they permit us to survive and to form social groups.
The Real PC
Thursday, October 9, 2003
I don't know what is meant by "computers can't get outside themselves; they can't change their programming".
Humans can't change their (genetic) programming either, but they can still learn. How can you tell if an automaton is learning? You see if it changes its response to the same stimulus over time. If by doing so it is approaching some goal, then it's learning.
Hell, the stupid paperclip can do that much. By the time you click close on it three times, it gets the hint and doesn't come up any more. Now, of course, it's hard-wired to just that one certain activity. But in general it could have a rule that says, "anything that I do wrong three times, change my behavior". And if it was any more sophisticated, it could have a camera that looks at the user, looks for signs of irritation, and then modifies its behaviour appropriately ... wouldn't that be learning?
After all, it's the same basic algorithm that humans are programmed with.
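The "three strikes" rule really is a one-screen program. A toy sketch in Python (hypothetical -- obviously not how the real paperclip works):

from collections import defaultdict

class Assistant:
    # One general rule: anything the user dismisses three times gets suppressed.
    def __init__(self, threshold=3):
        self.dismissals = defaultdict(int)
        self.threshold = threshold

    def offer(self, suggestion):
        if self.dismissals[suggestion] >= self.threshold:
            return None  # got the hint: a changed response to the same stimulus
        return suggestion

    def dismissed(self, suggestion):
        self.dismissals[suggestion] += 1

helper = Assistant()
for _ in range(3):
    helper.offer("It looks like you're writing a letter.")
    helper.dismissed("It looks like you're writing a letter.")
print(helper.offer("It looks like you're writing a letter."))  # None: it stays quiet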
Alyosha`
Thursday, October 9, 2003
This thread is an excellent example of why no one should be allowed to program without a firm basis in mathematics and philosophy.
anon
Thursday, October 9, 2003
Well, I don't know if they shouldn't be allowed to program, but they definitely shouldn't be allowed to write philosophy, even if it is just a discussion forum.
It's sad when a group of supposedly smart people formulate arguments about positions with which they have only a brief acquaintance. And don't try to tell me you're learning. Arguing with equally clueless people isn't a good mode of learning.
I studied this for 4 years in college, and still wouldn't venture my own hypothesis. I realize how little there is that I really know.
Kant ain't a word
Thursday, October 9, 2003
I studied this much longer than 4 years. I can't write detailed arguments here, because obviously there isn't enough time.
People who have been indoctrinated into a particular set of ideas often enjoy feeling superior to those who think for themselves.
The Real PC
Thursday, October 9, 2003
But too often people mistake not being familiar with the literature for 'thinking for themselves'.
Kant ain't a word
Thursday, October 9, 2003
Michie's main interest in the 80's (I don't know if he revised that later), was whether a computer (however you define that, as hardware, software or the combination), could create new knowledge.
From all the things which are already known could some new knowledge be synthesised in software? He doubted it.
At the time I thought this shortsighted, but then again we develop new knowledge for ourselves by interacting with the environment, and that interaction involves hypothesising.
The major evolutionary leap to intelligence involves tying up a large amount of the available brain tissue with simulating future possibilities, how another entity feels, what they might do, what steps could be taken to avoid this or that branch of behaviour and so on.
This requires considerable energy, a lot of not moving, so a stable and mutually supportive society is needed as well.
Computers, software AI, whatever, get their information from very restricted perceptual channels, and it would be very awkward for them to acquire new channels. They might be able to generate hypotheses, but they could only test their simulations against dead perceptual realities.
Look at the amount of brute force specialisation programs like Deep Blue need in order to solve the problem of playing chess. Even the feeblest human chess player could translate the problem space of chess into a political game with people, and manage to cross the road, looking both ways and remembering the Green Cross Code.
As Michie said, Artificial Intelligence is meaningless, Machine Intelligence is more accurate but it should not be expected to be anything like Animal Intelligence.
Simon Lucy
Thursday, October 9, 2003
It seems to me that people often look at the state of AI today and conclude that actual machine intelligence could never exist. I think this is a pretty flawed conclusion for a couple reasons.
First of all, the field of AI today has very little to do with trying to make machines intelligent. The majority of AI research involves techniques for making computers *appear* intelligent by using fancy search algorithms. Most research is not working on how to actually make machines think for themselves, because that is far beyond our current level of technology. Basing any argument about the potential of AI on current AI techniques and research seems pointless, because current AI has nothing to do with making intelligent machines. Note that there may be some exceptions to this, as I'm not fully versed in current AI research.
Second, how can you deny the possibility of making machines intelligent when we don't really know what intelligence is? I mean, sure, I can say with reasonable certainty that current technology will not make an intelligent computer, but that doesn't mean much. If you want to argue that we can never fully understand intelligence and thus can't duplicate it, or if you believe that only a higher power can grant intelligence, that's one thing. But I don't buy the "machines are completely logical" or "machines can't change their programming" lines of argument. I have seen no convincing argument that a machine could not be designed which mimics the workings of the human brain. Yes, it would require greater technology than we possess today. Yes, it would require us to gain an understanding of our own brains and how our brains make us intelligent. And probably, such a machine would have to grow up and learn from its experiences just as humans do.
Mike McNertney
Thursday, October 9, 2003
If intelligence were produced by brains, then it would be possible to create intelligence. You would just have to learn exactly how brains work and then create machines that work according to similar principles. This is the underlying assumption of most AI research.
However intelligence is not something that can originate in a brain. So the creation of intelligence, or at least an understanding of what intelligence is, will have to wait until the next paradigm shift.
The Real PC
Thursday, October 9, 2003
>Second, how can you deny the possibility of making
>machines intelligent when we don't really know what i
>intelligence is?
What about explorative programming (Seymour Papert's LOGO, anyone?) -- writing a program and exploring its behavior as a way of learning about a given phenomenon.
Or, to restate, since AI is not so much about writing programs, there is little chance of learning about intelligence.
The purely theoretical approach of defining intelligence has been tried by philosophers; some 2,500+ years later, no results have been reached.
Michael Moser
Friday, October 10, 2003
Hmmm, I'm trying to think of some other location where intelligent behaviour could arise.
The liver perhaps?
Or maybe that's what the appendix is for, hmm but I had mine removed. Perhaps that's proof enough. Intelligence is just an appendix to being human.
Simon Lucy
Friday, October 10, 2003
Real PC wrote:
However intelligence is not something that can originate in a brain.
It's not clear to me what you're basing this statement on. We certainly don't have a good understanding of how it originates in the brain, but I don't see anything that rules it out altogether.
If you mean that some kind of "spiritual" (for want of a better word) agency is needed as well as the physical level, then I'd say the evidence for that is pretty weak too.
Matt
Friday, October 10, 2003
I think there are some fine examples in this thread of what Richard Dawkins calls the "argument from personal incredulity": I can't imagine how something might be so, therefore it must be false.
I also find it quite extraordinary that a physicist as smart as Roger Penrose can take seriously the idea that the special properties of the human brain that we call "consciousness" somehow arise from quantum effects. To ascribe our most unique quality to an effect distributed throughout the whole of matter seems remarkably perverse. Daniel Dennett has a pretty sensible take on most of this stuff, and his comments on Penrose are worth reading.
RealPC: if you have been studying this subject for years, but can still come up with gibberish like: "In other words, the human machine is able to get outside of itself, somehow. This probably has something to do with Godel's theorem", all I can say is that you need to go back and study it all again, because you didn't understand it the first time around.
rz: "actually all of biology needs a serious kick in the pants when it comes to physics. most molecular biologists are basing their models on a view of atomic structure that dates back to democritus. it isn't apparent that this is necessary to understanding how the mind works, but it couldn't hurt"
It isn't apparent that it's necessary to many of the other millions of things that biologists work on either. Otherwise they would use a different model. And of course they sometimes do. The structure of DNA, for example, depends on hydrogen bonding, which isn't to be found in ancient Greek writings. You don't really imagine they proceed in this way out of ignorance, do you?
I would say, rather the opposite. It's being able to treat atoms as indivisible wholes that allows you to think about more interesting questions such as the role of homeobox genes in embryonic development. And it's spending too much time thinking about quantum physics that produces daft ideas like the Penrose conjecture.
On the subject of brains and intelligence, it seems pretty clear that in humans, the faculty of intelligence is connected to having a brain. If your brain gets damaged, your intelligence often takes a knock, or gets altered in some way. However, it doesn't follow that a brain is the only way that intelligence can be produced. Most of you are doubtless familiar with Conway's Game of Life. This is most easily implemented on a computer, but can also be done using pencil and paper, or even just a sandpit. In the same way, it is possible that intelligence or even consciousness, can be implemented on some other platform.
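For anyone who hasn't played with it, the whole Game of Life fits in a few lines of Python -- a minimal sketch:

from collections import Counter

def step(live):
    # One generation; live is a set of (x, y) coordinates of live cells.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same glider, shifted one cell diagonally

The rules could hardly be simpler; all the interesting behaviour lives in the pattern, which is exactly why the substrate doesn't matter.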
There is also the need for a body. Most people can imagine themselves as a disembodied head, but if you then remove all the sensory organs, it's hard to say whether what you have left is truly sentient. As Simon Lucy hinted, this is one of the hardest problems with trying to construct an intelligence: it's likely that our type of intelligence depends critically on the wealth of our experience, from simple perceptions to the culture by which we pass the results of our learning to subsequent generations. Assuming that we can build a structure of sufficient complexity to act as a platform for intelligence, how are we to supply it with a rich enough experience to feed it? This POV would suggest that robotics is more likely to produce intelligent entities than software-based research.
Dave Hallett
Friday, October 10, 2003
If the brain is damaged, intelligence and various abilities may be affected. Using our intelligence in the world of our sensory perceptions requires a brain.
The fact that intelligence does not originate in the brain, and that the mind can act independently of the brain, can be seen in the results of parapsychology research, as well as in the experiences of countless people in all times and places throughout history.
Do not discount the parapsychology evidence if your only knowledge of it comes from fanatical close-minded "skeptics" such as James Randi.
The Real PC
Friday, October 10, 2003
All right, I'll bite. What sort of parapsychology result would suggest to you that the mind operates independently of the brain. Presumably we're not talking about spoon-bending, UFO sightings or poltergeist activity.
And yes, I agree with you that "skeptics" are usually remarkably unsceptical about their own world view. On the other hand, you *are* coming dangerously close to sounding like a nutter...
Dave Hallett
Friday, October 10, 2003
Real PC,
>> The fact that intelligence does not originate in the brain, and that the mind can act independently of the brain, can be seen in the results of parapsychology research, as well as in the experiences of countless people in all times and places throughout history. <<
Oh boy! What's the weather like on your planet?
Mark Pearce
Friday, October 10, 2003
Real PC,
>> If intelligence were produced by brains, then it would be possible to create intelligence. <<
This is a complete non-sequitur.
>> This is the underlying assumption of most AI research. <<
No, it's not.
>> However intelligence is not something that can originate in a brain. <<
The wheel may be turning, but the hamster is stone dead.
Mark Pearce
Friday, October 10, 2003
[What sort of parapsychology result would suggest to you that the mind operates independently of the brain]
One example, out of many, is the engineering anomalies research at Princeton (PEAR). They have shown (many times over) that a subject's consciousness can influence the output of a (truly random) random number generator. I think the most convincing experiments are the ones using non-human subjects.
The parapsychology literature is vast and the evidence is conclusively in favor of ESP and psychokinesis.
There are lots of fake psychics, and lots of crazed believers in ESP, etc. On the other hand, there are plenty of crazed "skeptics" who are more concerned with upholding their world-view than with promoting a quest for truth.
There have been low quality parapsychology experiments, just as there is some low quality research in any field. There is also plenty of extremely high quality parapsychology research -- trying to please the skeptics has motivated parapsychologists to go beyond what would be considered acceptable in other fields.
But they can't ever please the skeptics. Robert Park, in Voodoo Science, dismisses parapsychology because it relies on statistics.
The Real PC
Friday, October 10, 2003
I can't help thinking that some are confusing two different but connected phenomena, consciousness and intelligence.
The only kind of definition we have of intelligence is an entity that exhibits intelligent behaviour. Which is pretty much useless.
There are quite a number of entities around though that have consciousness, knowledge of identity, self but whose record in evincing intelligent behaviour is slight. This does not make them any less conscious than any other entity. Indeed, such entities often show an all too strong sense of self and identity.
Simon Lucy
Friday, October 10, 2003
The work at PEAR is open to criticism:
for example http://skepdic.com/pear.html
or http://www.tricksterbook.com/ArticlesOnline/PEARCritique.htm
Randi incidentally is not in the least a fanatic; he is simply a professional magician who has spent the last couple of decades looking at para-psychology experiments and finding that they either didn't work when he was around or were the result of fraud.
Since he has offered a considerable sum of money to anyone who can conduct a para-psychology experiment that proves ESP while he is present, and has yet to pay out, I reckon your attacks on him are a little unfair.
Stephen Jones
Friday, October 10, 2003
Stephen,
It's valid criticism if you accept that the PEAR scientists are fools.
Try applying your skepticism to Randi's contest -- he selects which applicants he is willing to test.
The Real PC
Friday, October 10, 2003
If we (for convenience) accept the possibility that people may be able to influence random number generators, how does this suggest that intelligence in humans does not arise from their possession of a brain?
Dave Hallett
Friday, October 10, 2003
Stephen,
I love the skepdic criticism. The results might be due to any of the following:
deliberate fraud or cheating
errors in calibration
unconscious cheating
errors in calculation
software errors
self-deception
But they don't provide evidence to back up their accusations!
Research scientists spend all their time trying to eliminate the problems listed. No research is perfect, so the accusation would apply equally well to all scientific research.
Why don't we just forget about science altogether?
The Real PC
Friday, October 10, 2003
Man, this was a great thread until it got sidetracked ... and now I feel obligated to step up and beat the dead horse some more.
In the PEAR studies, there are a whole number of possible causes:
1. deliberate fraud or cheating
2. errors in calibration
3. unconscious cheating
4. errors in calculation
5. software errors
6. self-deception
7. magical mind over matter: thinking something makes it so.
Now, there's no more evidence for #7 than there is for #1-#6, but somehow it's the first explanation The Real PC jumps to, while ridiculing skeptics for dwelling on the more prosaic #1-6.
Alyosha`
Friday, October 10, 2003
Is it fair to accuse someone of these things without citing a shred of evidence? Is it reasonable to accuse someone of stupidity or fraud without understanding the research they have been doing for decades?
The Real PC
Friday, October 10, 2003
>>Roger Penrose can take seriously the idea that the special properties of the human brain that we call "consciousness" somehow arises from quantum effects. To ascribe our most unique quality to an effect distributed throughout the whole of matter seems remarkably perverse.<<
I'm not sure what Penrose has to say about it, but I don't see how you'd associate quantum effects with only the human brain and not with the brain of a chimpanzee or even the brain of an insect.
Knowledge Maker
Friday, October 10, 2003
I'm not accusing anyone of anything; I'm merely pointing out that Jahn has not provided any more evidence for hypothesis 7 than hypotheses 1-6.
He certainly has not shown that his system is 99.99% error-free, which is the effect he argues he is measuring.
Oh, and has it been duplicated elsewhere? To my knowledge (and Martin Gardner's), no.
Should be pretty easy to duplicate, no? Stick a webform with three buttons: "what bias do you want to create: high, low, none". Then run off, grab 200 digits from random.org, store those in a SQL database ...
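And the analysis side is no harder: count the 1s and ask how far the count strays from chance. A sketch in Python (thresholds illustrative):

import math, random

def bias_z_score(bits):
    # How many standard deviations the count of 1s sits from a fair coin:
    # n fair bits have mean n/2 and standard deviation sqrt(n/4).
    n, ones = len(bits), sum(bits)
    return (ones - n / 2) / math.sqrt(n / 4)

trial = [random.randint(0, 1) for _ in range(10000)]
print(round(bias_z_score(trial), 2))  # anything beyond about |3| would merit a closer look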
Alyosha`
Friday, October 10, 2003
Yes, various labs are using random number generators and replicating Jahn's effect.
As for fraud and bias -- any competent researcher makes every effort to avoid fooling and being fooled. There are many techniques for controlling the various factors that can lead to bias or error, and all good researchers know about them.
The Real PC
Friday, October 10, 2003
If we (for convenience) accept the possibility that people may be able to influence random number generators, how does this suggest that intelligence in humans does not arise from their possession of a brain?
Dave Hallett
Saturday, October 11, 2003
It suggests that the brain is a machine used by the mind. AI researchers generally assume the brain and the mind are identical. They try to build machines that work like the brain in order to simulate, and understand, intelligence.
If the brain is a machine used by the mind to interact in our "physical" world, then we have no idea what kind of machine the mind is, or how to build one.
If AI researchers have a fundamentally wrong idea about mind and brain, no real progress will occur.
I think progress in understanding mind and brain is occurring because of research on the level of quantum physics. Josephson, for example.
The Real PC
Saturday, October 11, 2003
Sorry for butting in, I only skimmed this thread, but I have a few observations.
1. If the mind is not part and parcel of the brain, and is something other, i.e. something that "controls" the brain or influences it from outside, and nobody has verified this, then all we're doing when we talk about ourselves is speculating, never mind AI.
2. If you show something like The Sims, or Eliza, or computer animated movies, to certain kinds of people -- people unfamiliar with technology, or children -- they might think it's "real people." At what point does something like The Sims become so real it's almost indistinguishable from the real thing? Something along the lines of the Holodeck, and a machine that really passes the Turing test?
3. Currently computers are limited in how they interact with us; I don't know of any computer or program that has the lexicon, vocabulary, and "understanding" of language to communicate effectively as we do. So we limit the realm in which we interact with a computer -- video games, chess, etc. A closed system. Within a closed system, a computer can emulate to some degree actions that would seem human. A chess version of the Turing test could easily fool someone.
4. The math game Life is the basis for all the Sim games from Maxis. Sim City is really Life on multiple levels, and with more complex rules.
5. We talk about humans being logical or illogical vs. computers. We say computers are logical because we know how they're programmed. To use the example of The Sims again, they must have an underlying logic, but if you don't know what it is, it seems they act on whims. Is it possible that humans too have an underlying logic - much more complex or subtle than we can conceive?
6. Back to the original post: "Stronger and faster is possible, more intelligent is not." It depends perhaps on your definition of intelligence. Much of intuition is often a powerful pattern recognition database -- something feels strange because it doesn't quite match your experience. Again, within the realm of a closed system, a computer could "intuit" your next move based on your past behaviours.
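Within a closed system, that kind of "intuition" is just bookkeeping. A toy next-move predictor in Python (all names mine):

from collections import Counter, defaultdict

class MovePredictor:
    # Guesses a player's next move from what has followed each move before.
    def __init__(self):
        self.followers = defaultdict(Counter)
        self.last = None

    def observe(self, move):
        if self.last is not None:
            self.followers[self.last][move] += 1
        self.last = move

    def predict(self):
        if self.last is None or not self.followers[self.last]:
            return None  # no experience to "intuit" from yet
        return self.followers[self.last].most_common(1)[0][0]

p = MovePredictor()
for move in ["rock", "paper", "rock", "paper", "rock"]:
    p.observe(move)
print(p.predict())  # "paper" -- rock has always been followed by paper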
Much of the fear of AI, on whatever level, comes from our belief that one day a computer may decide our fate, and we can't make an emotional plea to a computer, or even point out a flaw in its logic. Sure, it will probably be a human who puts a computer in that position, but one day, let's say, performance records will be computerized and who keeps their job will be based on some algorithm; or your medical history will be computerized and who gets a heart transplant will be based on another algorithm.
At this point the computer may have the "knowledge" and make the same decisions as an expert in the field - it's possible probably to do something like this today - but you can't tell the computer about all your hopes and dreams (unless somehow this has been turned into a variable - personality.hopes).
I now return you to your regularly scheduled debate on the nature of "the mind."
www.MarkTAW.com
Sunday, October 12, 2003
I fail to see why ESP experiments suggest that "the brain is a machine used by the mind". Since no-one seems to have much in the way of ideas as to *how* humans might influence random number generators, it follows that if the effects are real, there are no immediate implications, other than the obvious one that we need to know more about this.
What do you imagine the mind uses the brain for? Thinking, perhaps? To put it another way, what do you think the mind can't do that it needs a brain to accomplish?
Dave Hallett
Sunday, October 12, 2003
The mind uses the brain to interact with what we call the "physical" world. The brain has sensory areas that process information about visible light, for example, so the mind can build a model of the visible world. The brain has motor areas that the mind uses to control the voluntary muscles to act on the physical world.
As for ideas about how the mind and brain work, there is no shortage. But you have to venture into "new science." There is a strong bias among AI researchers against the idea of a mind that can act separately from a brain, so you might not hear of these ideas in AI courses. AI researchers would rather think the mind equals the brain. If it doesn't, AI research has to delve into quantum physics, which would complicate things for the non-physicist AI researcher.
The Real PC
Sunday, October 12, 2003
[Is it possible that humans too have an underlying logic - much more complex or subtle than we can conceive?]
Of course we have an underlying logic!!
The Real PC
Sunday, October 12, 2003
You seem to be saying that the brain in your opinion forms an interface between the mind (which I think you're implying is "non-physical" in nature) and the "physical" world.
(Your use of quotes seems to suggest that you think this distinction is something of an artificial construction, but either we're going to speak in these terms or not - you can't eat your cake *and* reserve judgement as to whether the chocolate frosting is real.)
Anyway, this idea is not without merit: it's difficult to tell the difference between a PC and a TV purely by inspecting the screen. Similarly, it's possible that our brains are simply devices for picking up the "mindness" that pervades the universe, and seem to grant us special intelligence simply because they do so more effectively than the brains of say, dormice. If this view is correct, we will learn no more about the nature of mind by examining brains than we will learn about episodes of Friends from dismantling a TV set.
The biggest problem with this idea is that it's not disprovable. No matter how much evidence you gather about the brain, no matter how completely you account for how it can produce the behaviour we wish to account for, it is always going to remain possible that there is an intangible entity pulling the strings behind the scenes. It's basically the same as postulating a role for God in physics. As evidence accumulates, the role for God cannot be made to vanish, only to get smaller and smaller. So basically, the reason AI researchers don't proceed on this basis is down to Occam's razor, and the fact that the scientific method usually depends on falsifiability (although that is somewhat disputable - the theory of evolution is difficult to falsify, for example).
If you think this argument is mistaken, you are welcome to propose an experiment which if confirmed, would rule out the possibility of mind existing independently of brain. I think you'll find it tricky.
Finally, I see absolutely no reason why quantum mechanics should be dragged into this picture. Other than the fact that QM consists of a bunch of mathematical complexities whose real-world interpretation is highly debatable, thus offering a sort of Rorschach blot test in which people with an axe to grind can generally find something that fits their purpose, I can't see any obvious connection with the subject matter. William of Occam is again going to the barbers, I fear.
Dave Hallett
Sunday, October 12, 2003
Dave
Other than the following quibble, I agree with you entirely.
The principle of falsifiability has nothing to do with how difficult it is to demonstrate that something is false. It is governed solely by whether or not it is in principle possible to demonstrate that it is false.
Theories that depend on a 'GOD' principle, such as your universal mind example, are in principle unfalsifiable and so unscientific. Evolution may or may not be difficult to falsify (and I don't actually think it is any harder than any other established scientific theory), but it is in principle possible.
Also, just because a theory is unscientific does not mean that it cannot be true but the burden is certainly on its proponents to demonstrate why we should believe it. That would either involve reformulating the theory in a scientific manner and demonstrating its validity or launching a massively successful religious movement (this is generally the more common approach).
Stephen Martin
Sunday, October 12, 2003
Theories that cannot be falsified are not unscientific - they are ascientific.
The principle of falsifiability only works for the physical sciences. Biology and the social sciences, which deal with much more complex interactions, normally work on the principle of accumulation of evidence.
Stephen Jones
Sunday, October 12, 2003
Two Stephens, how inconvenient.
S1:
I agree with the quibble. It is certainly possible in principle to find fossil evidence that would throw up problems for evolutionary theory in its most strongly stated form. What I meant to say (and I wasn't very clear) is that as a theory it's not very amenable to *experimental* testing. It's hard to demonstrate selective pressures, difficult to show competition either between or within species, and the issue of controls is likewise thorny. Nonetheless, it has the merit of not requiring an invisible deity doing the hard work, and as such as makes a good null hypothesis.
S2:
I in turn must quibble with the comment about biological sciences in general. In molecular biology, for example, most experiments will give a clear yes/no answer without even the need to apply statistics, let alone resort to "accumulation of evidence". A far cry from some aspects of particle physics, where experimenters laboriously adjust their apparatus until it gives the "correct" value for the "constant" they are measuring, and then claim to have measured its nth decimal place more accurately than ever before!
However, there are some parts of biology which are just as you say. Ecology, for example, does not produce tidy testable hypotheses all the time. But it is still science.
Dave Hallett
Monday, October 13, 2003