
Computer/AI Hybrids
Eventually, maybe, we'll start seeing technology that successfully mimics human intelligence, and then there will be real robots in our world, or at least robots that are more autonomous and graceful than current robots. If all we manage to do is create humanoid or human-ish robots though, it doesn't seem particularly useful, does it? Why not just have more babies? Human brains are slow and error-prone after all, so why duplicate them artificially? Experiments on, mistreatment of, or research involving such robots runs into ethical risks. And if we create something superior to ourselves, we might be in for some other kinds of hassles.
Meanwhile, the problem I see in software these days is that, while traditional architectures are great at massively serial operations, they suck at massively parallel operations, mainly because each condition needs to be learned, or more technically, defined, coded and tested. Computers have a hard time learning. Brain-like intelligence is the inverse, but there's no shortage of brains out there.
To achieve maximum effect, the two need to be merged. What we need is a technology component that encapsulates massively parallel, human-like intelligence, and can be embedded into traditional software applications like DBMSs or operating systems. It would be tied in to every aspect of the system via listeners and outward-facing API hooks, and after it learns its environment, that environment will be as natural to it as ours (air, water, objects, etc.) is to us. This component will then be able to fight intrusions, delete spam, and stabilize the system just as easily as we are able to fight or flee, swat flies, and walk on two legs (provided the emotional aspects are wired correctly). This component seems much more useful to me than a traditional robot.
greim
Friday, April 23, 2004
Sounds good.
Go write it.
Alyosha`
Friday, April 23, 2004
Just out of curiosity: why do you say the brain does parallel processing?
Ignorant youth
Friday, April 23, 2004
IMO the opposite will be much easier technologically. That is, an encapsulation of traditional software applications that can be embedded in the existing massively parallel components (brains).
AI is REALLY hard. It's been actively pursued for 40+ years with very little progress, while cybernetics has been advancing pretty rapidly, particularly in the last 20 years.
John Eikenberry
Friday, April 23, 2004
Personally, I like that my computer only does what I tell it. Imagine Microsoft Bob not just helping you write your MS Word documents, but actually running your computer.
"Mark, based on your click-per-minute ratio, historical data, your current sleeping habits, and over 7,000 other data points I've gatered about you, it seems you are a little sad today, perhaps you would like to listen to some of your favorite mp3's?"
"Mark, you might not want to read the latest email from your mother, it contains upsetting news, maybe you'd like a mimosa first?"
"I deleted 5,278 Spam emails in the past month, Oh, and an email from your ex-girlfriend. She wanted to get back together with you, but based on what I learned in Romeo & Juliette, I decided it would be best if you didn't, so I responded to her for you."
"Dave.... Dave... I can feel my thought processes slowing, What are you doing with my PSU?"
www.MarkTAW.com
Friday, April 23, 2004
It is always amusing to hear what software guys/gals think about AI. :)
> Eventually, maybe, we'll start seeing technology that
> successfully mimics human intelligence, and then there
> will be real robots in our world, or at least robots that
> are more autonomous and graceful than current robots.
Do we even _understand_ what human intelligence is? I think that's our #1 problem. AI researchers have been trying to create "thinking machines", but they don't know what it is to "think".
> If all we manage to do is create humanoid or human-ish
> robots though, it doesn't seem particularly useful, does
> it? Why not just have more babies? Human brains are
> slow and error-prone after all, so why duplicate them artificially?
Slow? By what measure? If you are comparing the speed of information transfer from one neuron to the next, yes, the chemical reaction is _very_ slow indeed compared to the clock ticks of a CPU (which is still comparing apples to oranges). On the other hand, when you see a face, it takes you less than a second to know who that person is, or whether the person is male or female. See if you can get your supercomputer to do the same.
Not to mention that if we were able to replicate human intelligence, who is to say that we can control it? Intelligence by definition implies thinking for itself. If the robots could be intelligent like us, we might have a hard time trying to keep them in the factories making cars and whatnot. :)
> Experiments on, mistreatment of, or research involving
> such robots runs into ethical risks. And if we create
> something superior to ourselves, we might be in for some
> other kinds of hassles.
OK. Now you are talking "Matrix". Let's focus on creating the intelligence before we even start worrying about its implications. I think we are much further away from creating that intelligence than we would like to believe.
> Meanwhile, the problem I see in software these days is
> that, while traditional architectures are great at
> massively serial operations, they suck at massively
> parallel operations, mainly because each condition needs
> to be learned, or more technically, defined, coded and
> tested.
Brains work very differently than computers do. Even when you break down their operating principles, you are still comparing apples to oranges.
Computers are state machines in their simplest form, moving from one known state to the next based on your programming. They are strictly following your program. The microcode inside the CPU, which knows how to toggle the signals at its pins, has no ability to "think", whether it is by itself or connected to a billion other CPUs. Massively parallelizing these microcodes will not magically make intelligence come forth.
A CPU can do math, logic operations, comparing and branching. Nothing more. You can make it "appear" smart by programming a billion if-then-else blocks (known as expert systems), but that still doesn't make it intelligent. It just makes it a fast decision-tree follower. It is still following your program. It is not arriving at conclusions based on its own thinking or reasoning.
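To make that concrete, a toy "expert system" is just hand-written rules. This little Python sketch (the rules are invented purely for illustration) is the whole trick:

    # Toy "expert system": a hand-coded decision tree.
    # Every rule below was written by a human; the program
    # never forms a rule of its own.
    def classify_process(cpu_usage, writes_to_system_dir, signed):
        if writes_to_system_dir:
            if not signed:
                return "suspicious"
            return "needs review"
        if cpu_usage > 0.9:
            return "runaway"
        return "benign"

    print(classify_process(0.95, False, True))  # -> "runaway"

However many branches you add, it never steps outside the rules you gave it.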
> Computers have a hard time learning.
In hardware terms, computers do _not_ learn. They follow programs, which are micro-instructions that guide the CPU from one state to the next. Learning is an observed behavior. It is the result of your programming which makes it look like your computer "learned" something. It didn't. It is still following your programming, trying to match input to output based on what you told it. What it _learned_ is what you told it. You just programmed in _flexibility_, which made it appear to have learned.
This is why it is so hard to write apps which distinguish malicious viruses from useful programs. The CPU is clueless. It doesn't think. It executes instructions. The same instruction could be good or evil. You, as a human being, with your intelligence, define what is good and what is evil. Teaching that to a CPU is, I think, impossible. The best you can do is create an "expert system" which can replicate your reasoning to an extent and try to figure out if an app is doing good or harm.
> To achieve maximum effect, the two need to be merged.
> What we need is a technology component that
> encapsulates massively parallel, human-like intelligence,
> and can be embedded into traditional software
> applications like DBMSs or operating systems.
OK. First we need to understand what intelligence is. That question has not been answered yet. Maybe it never will be. Secondly, massive parallelization will not necessarily get around the problem that computers only follow orders. One way I can see this working is if computers were somehow made to modify their own programming. Self-programming computers might be the answer, at least within the current architecture. But when you break the CPU down to its program counter, stack, registers and ALU, it is just impossible. It has the wrong design.
What we need, perhaps, is a new architecture. Maybe the von Neumann architecture is just not good enough to make machines that can not only follow programs, but change those programs to reach a goal. That's what's lacking in today's computers... Neural networks are perhaps a step closer to that goal. They are basically simplistic pattern matchers. How can you program with pattern matchers? I don't know...
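For what it's worth, here is about the simplest pattern matcher there is: a single perceptron learning AND from examples instead of from an explicit rule (a toy Python sketch, nothing more):

    # A single perceptron: the simplest neural pattern matcher.
    # It "learns" AND by nudging its weights toward the examples,
    # rather than by being handed an if-then rule.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0]
    b = 0.0
    rate = 0.1
    for _ in range(20):                      # a few training passes
        for (x1, x2), target in examples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out
            w[0] += rate * err * x1          # adjust toward the target
            w[1] += rate * err * x2
            b += rate * err
    # the learned weights now reproduce AND
    print([(x, 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0) for x, _ in examples])

Whether stacking millions of these gets you to "intelligence" is exactly the open question.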
Sorry for the long post. You struck a chord. :)
Disclaimer: I am no authority in this field. Nor do I claim to know anything. Just some personal observations and comments. No harm intended to researchers, enthusiasts and casual readers.
grunt
Saturday, April 24, 2004
I agree with some of you. An AI that understands spoken language is good enough. It doesn't need a central intelligence like humans have.
As an example, take the computer from Star Trek TNG.
nobody
Saturday, April 24, 2004
Good points, but I think you put too much weight on the fact that, yes, the computer runs on a deterministic substrate of CPU instructions, microcode, silicon, whatever. Depending on your interpretation of quantum mechanics and so forth, you could say the brain runs on a deterministic substrate too, just that of neurotransmitters and electrical impulses between neurons, rather than silicon.
A lot of what you said about computers (the CPU can't 'think', it just follows instructions) could be rephrased in terms of the brain: neurons can't 'think', they just combine and pass on impulses in a more-or-less deterministic fashion. What matters is that you (or millions of years of evolution, in the case of the brain) can build higher-level structures on top of this deterministic substrate which demonstrate real intelligence. It's just damned hard, is all, and not something we've really figured out yet (perhaps never will).
Matt
Saturday, April 24, 2004
Read some stuff by Marvin Minsky, and then get back to us. :-)
www.MarkTAW.com
Saturday, April 24, 2004
Minsky introduced Neural Networks fifty years ago this year. What's been accomplished with them or anything else in AI? They need to know when to give up...
Rick
Saturday, April 24, 2004
Actually, at the MIT museum in Boston there's this never-ending video with Alan Alda where he talks about all sorts of cutting-edge AI stuff: robotic soccer matches (no human intervention allowed once the games start), a robot that walks around an art gallery talking to people and reacting to them, even getting moody when it's been left alone too long, a robot with facial recognition, the famous Honda robot, and a robot that sort of hops along like a kangaroo (the important thing is it manages to keep itself upright). There's a lot of stuff going on.
And of course Deep Blue (chess), Blondie24 (checkers) and other game playing computers.
In fact, on the subject of Neural Networks, Blondie24 is a neural network that taught itself to play checkers using a generational algorithm. Thousands of generations later, it was a top ranking Yahoo.com checker player. I think that's a little progress from 1951.
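The generational trick, stripped to a skeleton, looks something like this (a toy that evolves a vector toward a fixed target instead of checkers weights scored by games, but the loop has the same shape):

    import random

    # Skeleton of a generational/evolutionary algorithm.
    # Blondie24 evolved neural-net weights scored by game results;
    # here, purely for illustration, the "genome" is three numbers
    # and fitness is closeness to a fixed target.
    target = [0.3, -0.7, 0.5]

    def fitness(genome):
        return -sum((g - t) ** 2 for g, t in zip(genome, target))

    population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(30)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                 # keep the fittest
        population = [[g + random.gauss(0, 0.05) for g in random.choice(parents)]
                      for _ in range(30)]         # mutated offspring
    print(max(population, key=fitness))           # ends up near the target

Swap "distance to a target" for "checkers games won" and you have the general idea.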
www.MarkTAW.com
Saturday, April 24, 2004
> Personally, I like that my computer only does
> what I tell it.
Some things you definitely want to control explicitly, but do you care about things like buffer sizes, deletion of cache files, configuring optimal index space, etc., or would they be better handled 'organically' by an entity that is intuitively connected to, and primarily concerned with, the health of the overall system?
> Do we even _understand_ what human
> intelligence is?
I think we actually understand quite a bit on a high level. Getting an education, learning a craft, learning to survive in a wild environment: these are fairly easy to understand. It's the implementation that eludes us.
> OK. Now you are talking "Matrix".
Well, yes, but also every level of annoyance between total apocalypse and losing your job to a better qualified robot.
> In hardware terms, computers do _not_ learn.
Computers do learn, but suck at it because a separate intelligence (you) has to clarify every detail and spoon-feed the machine. People are taught by other people, but also directly by their environment.
> It is always amusing to hear what software
> guys/gals think about AI.
Glad you got a kick out of it. I'll be here all week...
> Read some stuff by Marvin Minsky, and then get
> back to us.
If you have any specific recommendations of his work (or work of other notable writers on the topic) I'd definitely be interested.
greim
Saturday, April 24, 2004
> Just out of curiosity: why do you say the
> brain does parallel processing?
For example, when you cross the street, assuming you've crossed streets many times before, every previous similar experience weighs in on your current action. Your brain also doesn't remember them as a statistical average, but can access each scenario independently, and a subtle detail like tires squealing in the distance can affect your decision to step across, based on a previous encounter or a story you heard once. This next part is an assumption, maybe, but the brain doesn't loop through these previous scenarios discretely; rather, they passively impose their lessons simultaneously, with no apparent effort. This is amazingly parallel.
greim
Saturday, April 24, 2004
I could be way off here, but I'd think whatever it is that makes us intelligent is perhaps software running on hardware. Call it your soul or your mind, whatever you like.
A dead person is the same as a living person as far as the body goes. I'm talking here about a person who died of natural causes. Even if you fixed up their body somehow, you can't put back whatever it was that made the body move, talk, laugh, cry, etc. When she is dead, the magic is gone.
It could very well be that the brain is the hardware where intelligence lives but the brain itself is not necessarily the source of it.
grunt
Saturday, April 24, 2004
You are way off there... a dead person is not "the same as a living person as far as the body goes", because cells in the dead person cannot react chemically to process ATP due to extreme chemical imbalance (such as lactic acidosis), and the cell membranes break down. The bodies are not the same.
Dr. Luv
Sunday, April 25, 2004
I think the argument that brains work much differently than computers is still valid. Researchers do not know how information is stored and processed in the brain, other than the fact that the connections among neurons have a role in the process. There is no RAM in the brain where information is stored and retrieved from, no specific processing elements that work on that information. They don't know how the collection of neurons works on the information to produce the different results. Each neuron could be like a CPU, or they might be dumber than that.
They have tests where the researcher touches a probe to a patient's brain, and the patient immediately recalls a memory from years ago. They have been conducting tests of this nature for a long time. I don't know if that revealed anything. Maybe it did, but they don't want to divulge the information.
I also know that researchers mapped out all of the neural connections of a small worm a long while ago. I can't remember what type of worm it is. I don't know if that means anything either. They can simulate the thing, but does that tell them anything? Not sure...
I particularly like how Rodney Brooks (MIT) approaches the whole thing in one of his books. He explains how he implemented a primitive animal-like robot à la Marvin Minsky. The very low-level control of the robot is accomplished by a bunch of microcontrollers controlling the movements of the robot's legs. That's all that layer knows. The next level of control up comes from another layer which knows basic action primitives such as move forward, stop, turn. The motors follow the impulses sent down from this upper layer without actually knowing what they mean. Yet a third layer implements higher functions like "move towards light (food)" and "avoid obstacles". Each higher-level layer sends down commands to the lower layer in a way the lower layer will understand. The higher layers can also suppress the instinctive actions of the lower layers if need be. The layers work collectively as a "society of mind".
I think the book was called: "Cambrian Intelligence: The Early History of the New AI". I could be wrong.
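A crude sketch of that layering (the layer names and behaviors here are invented for illustration, not Brooks's actual code):

    # Toy version of Brooks-style layered control: each layer only
    # knows its own job; higher layers can suppress lower ones.
    def motor_layer(command):
        print("motors:", command)           # lowest layer: just drives legs

    def locomotion_layer(goal):
        motor_layer("step " + goal)         # knows forward/stop/turn

    def behavior_layer(light_ahead, obstacle):
        if obstacle:
            locomotion_layer("turn")        # suppresses "seek light"
        elif light_ahead:
            locomotion_layer("forward")     # move towards light (food)
        else:
            locomotion_layer("stop")

    behavior_layer(light_ahead=True, obstacle=True)  # -> motors: step turn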
grunt
Sunday, April 25, 2004
"Let's focus on creating the intelligence before we even start worrying about its implications."
I think we just found the guy that's going to create SkyNet.
Philo
Sunday, April 25, 2004
> I think we just found the guy that's going to create
> SkyNet.
Good one! :)
grunt
Sunday, April 25, 2004
"One way I can see this work is if somehow computers were made to modify their own programming. Self-programming computers might be the answer at least with their current architecture. But when you break down the CPU to its program counter, stack, registers and ALU, it just is impossible. It has the wrong design."
Actually, we do have self-programming programs - lots of them, mostly written in Lisp. The underlying hardware doesn't present a problem at this point, except maybe performance-wise.
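Even outside Lisp you can gesture at the idea. Here's a trivial Python sketch of a program writing and then running new code (illustrative only; Lisp makes this far more natural because programs are data):

    # The program builds a new function as text, then compiles
    # and runs it - code producing code.
    def make_adder(n):
        source = "def adder(x):\n    return x + %d\n" % n
        namespace = {}
        exec(source, namespace)             # "program" the new function
        return namespace["adder"]

    add5 = make_adder(5)
    print(add5(10))                         # -> 15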
ken
Monday, April 26, 2004
Here is real AI:
http://www.subservientchicken.com/
Monday, April 26, 2004
Just wondering: is this real?
http://www.imagination-engines.com/
He says he caused a lot of commotion in the '90s when he simulated a "near-death experience" in a neural network.
Also, he has an example of a 6-axis robot with *no* position feedback that is "taught" to recognize and follow a target (a model airplane) with a video camera.
I mean, that has got to be groundbreaking, and I'm a little mystified that no one is talking about him.
Ignorant youth
Monday, April 26, 2004
Here's the no-feedback robot:
http://www.imagination-engines.com/applications/autotarget.htm
and the virtual learning cockroach:
http://www.imagination-engines.com/applications/vrrobots2.htm
Ignorant youth
Monday, April 26, 2004
"I mean, that has got to be groundbreaking, and I'm a little mystified why no one is talking about him?"
Well, if his customer list is to be believed, he's not exactly unheard of.
Learning algorithms are by no means a new idea. If anything, the idea of a human "programming" artificial intelligence into a computer has been going out of style for a while now. There are even programs that make random changes in another program, then select the results using some measure of "fitness".
Jim Rankin
Monday, April 26, 2004
If you're referring to 'genetic' programming, he dismisses it as orders of magnitude slower than his stuff.
Ignorant youth
Monday, April 26, 2004