Alpha Centauri Forums
  Non-SMAC related
  Machine Intelligence


Author Topic:   Machine Intelligence
Octopus posted 06-14-99 01:24 AM ET   Click Here to See the Profile for Octopus   Click Here to Email Octopus  
The last debate we had about this degenerated into bickering, and nothing was ever resolved. Here's the big question:

Is Machine Intelligence possible? Can an artificial intelligence be created? Can an artificial intelligence be made from a deterministic computer, or must there be an element of randomness?

Tolls posted 06-14-99 05:20 AM ET
Ooh...latest New Scientist I have has an article on growing nerve cells and wiring them up to chips...and...um...well, they don't quite know what to do with them yet, but I'm sure they'll think of something.

I'd post a link to the article, but they haven't bothered putting this one up yet.

DanS posted 06-14-99 10:58 AM ET
Get out your 'I, Robot', everybody...
Picker posted 06-14-99 11:11 AM ET
Why not? Nothing's impossible; we just don't know how to do it yet.
Provost Harrison posted 06-14-99 11:13 AM ET
Seriously, it probably is. It's just that no one understands the complexity of even a single neuron, if you read the article in New Scientist, never mind complicated neural networks like those in the brain.

And quite frankly it is staggering how neurons work: they do more than the sum of their chemical parts suggests. But remember, a lot of the current advances, and the integration of the different areas of biochemistry, lie in identifying the mechanisms of individual neurons and how they work. The human genome project may give us the means to finish this off. Then comes the different task of trying to comprehend the functioning of circuits within the brain, and then the whole cognitive process itself. It will be slow and difficult, but yes, I believe it is possible. And I don't think it is simply a matter of randomness. The brain works on no fixed response: different 'variations' in stimuli result in different 'responses', and this is what the brain has evolved to do best, and it is probably a key to understanding the process. For example, if you see red, you recognise red; if you see blue, you recognise blue; but there are responses in between (magenta and all of the colours in between), all generated from the three colour responses of the cone cells in the eye. If this is confusing, please ask; it is very difficult to explain what I am trying to say. I did mention a bit more on this in the 'Atheism' forum.

threeover posted 06-14-99 11:13 AM ET
with current technology: NO
with future technology: NO

We can only do so much, and we human beings are the limit in the "equation." We can create randomness, but we'll never create a self-aware machine. Even if it teaches itself and makes decisions by itself, at the most basic level there will always be human involvement (computer code). My opinion: it is not and won't be possible.


DanS posted 06-14-99 11:43 AM ET
Sure, I can virtually guarantee (with all the caveats: world destruction, human extinction, etc.) that machine intelligence will come to pass. Perhaps earlier than what we might expect at first glance.

However, this intelligence will be like a bad clone of human intelligence (and never sentience, whatever that really means). If the U.S. semiconductor industry has its say, speed will substitute for lack of sophistication. There are a couple of limitations to speed, since it is difficult to etch smaller semiconductors. There are some novel etching processes out there, so I don't expect a grinding halt to Moore's Law, only a slowing. Also heat, etc. It is assumed that superconducting will hit the mainstream, so the effects of this barrier may be limited somewhat.

The U.S. semiconductor industry is leaving the dirty work to the programmers. In this regard, there are some interesting building-blocks in the works right now. The most interesting, in my opinion, is "organic" computer programming--i.e., the program acts like a genetic organism: possible solutions are attempted, and only the fittest survive.
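DanS's "organic" programming is what is now usually called a genetic algorithm. A minimal sketch of the survive-recombine-mutate loop he describes (all names and parameters here are hypothetical, not from any particular library; the seed is fixed just to make the sketch reproducible):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def evolve(fitness, length=8, pop_size=20, generations=100):
    """Minimal genetic algorithm: bit-string candidates compete,
    the fittest half survives, and children are built by single-point
    crossover plus one point mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]           # only the fittest survive
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)     # single-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(length)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy goal: evolve the all-ones bit string (the "max ones" problem).
best = evolve(fitness=sum)
print(best)
```

A real "organic programming" system would evolve program fragments or circuit parameters rather than bit strings, but the loop is the same.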

Provost Harrison posted 06-14-99 11:58 AM ET
I see your point, threeover, but its applications will be more in improving our own abilities. Our definition of sentience comes from our own biological system (unless, of course, we find others). But I disagree: this is not impossible. We are not governed by equations; we adapt to stimuli, and so will any such sentience.
Orbit posted 06-14-99 12:14 PM ET
Yes, I believe machine intelligence is possible. Read the FAQ about the Meaning of Life.
http://singularity.posthuman.com/tmol-faq/meaningoflife.html
JohnIII posted 06-14-99 01:41 PM ET
We are machines though. Just organic ones.
John III
Thue posted 06-14-99 04:13 PM ET
Right up to the point where the first flying machines were built, many physicists thought that heavier-than-air flight was impossible for humans.
Around 1900 Lord Kelvin believed that physics was essentially a closed field.
And then there was that philosopher who composed a list of "things we will never know", including what stars are made of, at the same time scientists were making spectrographic examinations of the sun.
Unless you have definitive proof, you cannot rule out a possibility.

Never say never - and computer science even seems very open from our current point of view.

Octopus posted 06-14-99 09:57 PM ET
Provost Harrison: "(eg, if you see red, you recognise red, if you see blue, you recognise blue, but there are responses inbetween (eg, magenta and all of the colours inbetween), all generated with three colour responses of cone cells in the eye."

There's nothing amazing about color perception based on R, G, and B components. Computers can do this now, and probably much better than humans. Perhaps you are trying to say something different, and I just didn't get your example?

threeover: "My opinion: it is not and won't be possible."

Human beings already create things that are more complicated than any single individual can understand. Even if the creation of a sentient machine was beyond the capabilities of a single person, why do you say it is impossible? What problems in particular are the stumbling blocks?

"However, this intelligence will be like a bad clone of human intelligence (and never sentience, whatever that really means)."

Why not sentience?

"If the U.S. semiconductor industry has its say, speed will substitute for lack of sophistication."

Brute force is my favorite algorithm.

Provost: "We are not governed by equations"

Sure we are. All physical laws can be expressed in terms of equations. Are you suggesting that human thought somehow violates the laws of physics?

JohnIII: "We are machines though. Just organic ones."

Is the organic component necessary? Is a random (and/or quantum) component necessary, or can a deterministic computer program be considered sentient/intelligent?

Kefaed posted 06-14-99 10:06 PM ET
Machine intelligence, imho, is possible; however, we are pretty far from getting there. First off, we need to fully understand the human brain in order to create a rudimentary polymorphic AI that functions like it. No computer in existence, IIRC, has the interconnectivity of a human mind, and this interconnectivity is integral to intelligence.
Dark Nexus posted 06-14-99 10:15 PM ET
threeover - Still possible, even by your definition. All that needs to be done is to create a program that can make changes to its own code, and even create new chunks of code. Teaching itself and making decisions on its own is one thing, but when we teach it how to GROW..... then it can be dangerous.

Octopus posted 06-14-99 10:32 PM ET
"No computer in existence, IIRC, has the interconnectivity of a human mind, and this interconnectivity is integral to intelligence."

In general, a parallel algorithm can be completely replicated by a serial algorithm, although it would take longer (or require a faster computer). The hard part is going the other way (making a serial algorithm parallel), which is why multi-processor machines are so rare in the consumer market -- the software doesn't really take advantage of them.
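Octopus's point about serial machines replicating parallel ones comes down to double buffering: compute every "simultaneous" update from the old state, then commit them all at once. A toy sketch (the update rule and names are invented for illustration):

```python
def parallel_step(cells, update):
    """Run one synchronous 'parallel' step on a serial machine:
    every new value is computed from the old state, then committed
    together, so no cell sees a neighbor's update early."""
    return [update(cells, i) for i in range(len(cells))]

# Toy update rule: each cell takes the max of itself and its neighbors.
def spread_max(cells, i):
    return max(cells[max(0, i - 1):i + 2])

state = [0, 0, 5, 0, 0]
state = parallel_step(state, spread_max)
print(state)  # → [0, 5, 5, 5, 0]: the 5 has spread one cell each way
```

The serial loop visits cells one at a time, yet produces exactly the result a truly parallel machine would; it just takes N sequential updates per step.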

JohnIII posted 06-15-99 01:14 PM ET
"Is the organic component necessary? Is a random (and/or quantum) component necessary,"

I don't think so, but the point is that most people associate machines with wood and/or metal, whereas we are machines made of flesh, blood, water, etc.

"or can a deterministic computer program be considered sentient/intelligent?"
I don't know. Can it pass the Turing Test? (cue Zekkei)

John III

OhWell posted 06-15-99 01:40 PM ET
Once basic survival is taken care of, most human activity is devoted to filling emotional needs and wants. Many of those needs and wants derive from the fact that human intelligence "lives" in an organic body. So, what would a machine intelligence want or need? A machine intelligence would not have the instincts that go along with an organic body, so its actions may not even be recognizable to humans as the actions of a "living" being. Fear is a powerful motivation to humans, with the fear of personal death being among the strongest. A machine intelligence would probably not fear death.

Humans perceive the ability to communicate as one of the major requirements for intelligence. A machine intelligence may not even consider humans to be "intelligent", let alone want to communicate with them. A machine intelligence would likely operate on a vastly different time scale than humans. That would make human/machine communication, or any other interaction, very difficult.

In short, humans tend to look at intelligence from a human viewpoint which is somewhat narrow. Therefore, humans might not even recognize a machine intelligence when they see it!

El Presidente posted 06-15-99 03:10 PM ET
"we'll never create a self-aware machine, even if it teaches itself and makes decisions by itself, at the most basic level, there will always be human involvement (computer code)."

What if a digital intelligence were to be evolved instead of programmed? Would you still consider an AI unintelligent if its "code" wasn't human generated?

Frodo83 posted 06-15-99 03:36 PM ET
What if we tried making AI which could duplicate itself? For example, if you have AI which can create other AI, and then that can create more, human involvement is diminished. And what's to stop intelligent machines from expanding and growing? We already have computer viruses which can act just like real viruses. I think it is very possible that sometime in the future we'll have computers which can act just like real people.
OhWell posted 06-15-99 03:54 PM ET
F83,

"For example, if you have AI which can create other AI, and then that can create more"

Actually, the concept of a computer designing a more complex computer, which would design a more complex computer, etc., has been around for a long time. I remember it from back in the late 50s or early 60s. I do not think that anyone has ever made it work. You would need a program that could "write" a program more complex than itself. That would require imagination. And that may be the key to human intelligence: the ability to conceive of something that does not exist and then create it!
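The mechanics of a program writing and running another program are straightforward; the hard part OhWell identifies is making the generated program more capable than its author, not merely different. A minimal Python sketch of just the mechanics (function names are hypothetical):

```python
def write_program(n):
    """Generate the source text of a new function that adds n."""
    return f"def adder(x):\n    return x + {n}\n"

source = write_program(7)
namespace = {}
exec(source, namespace)        # compile and load the generated code
print(namespace["adder"](10))  # → 17
```

Nothing here requires "imagination": the generator only stamps out variations of a template it already contains, which is exactly why this alone does not produce a program more complex than itself.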

Philip McCauley posted 06-15-99 06:43 PM ET
Why couldn't the machine simply ask,
1."What can I not do?/What do I do poorly?"
2."Is it possible?/Is it possible to do it better?"
if so,
3."Why can I not do it if it is possible?"
4."How might I redesign myself so it is possible?"
Octopus posted 06-15-99 08:49 PM ET
Why couldn't Philip McCauley simply ask,
1."What can I not do?/What do I do poorly?"
2."Is it possible?/Is it possible to do it better?"
if so,
3."Why can I not do it if it is possible?"
4."How might I redesign myself so it is possible?"

So, when can we expect Philip McCauley 2.0?

Koshko posted 06-16-99 12:34 AM ET
Having machine intelligence won't be good for us humans.

If there is true AI, and they can build and repair themselves, what reason would there be for us to continue to exist? Humans won't be any use to self-repairing and self-replicating androids. Robots may go on, but humans will be obsolete.

Kefaed posted 06-16-99 12:49 AM ET
Hence the inherent need for the laws of robotics outlined by Asimov.
Provost Harrison posted 06-16-99 02:58 AM ET
Koshko, we could use the abilities of these computers to improve our own abilities and brains. Why be made obsolete when we can 'upgrade'? And still remain distinctly human.
Tolls posted 06-16-99 04:37 AM ET
OhWell:
Circuits have been "evolved" (I use quotes since there is a goal involved) to distinguish between two sounds ("stop" and "go"). The resulting circuit was nothing like anything a human would have designed. Maybe a similar mechanism (obviously on a much larger scale) could be used in the field of AI?
http://www.newscientist.co.uk/ns/971115/features.html

And for an overview of AI...
http://www.newscientist.com/nsplus/insight/ai/ai.html

jig posted 06-16-99 05:18 AM ET
I think we have to fully understand ourselves before we go on a search for a true AI. However, I'm not saying that all efforts without complete understanding of the human system are futile. What I am saying is that if we do attempt true AIs now, even if we succeed we'll be walking blindly into new territory. We might not be prepared for whatever may happen next, especially with all those religious groups around.

I don't understand why people think having intelligent machines will be bad for us??

quote:
Having machine intelligence won't be good for us humans.
If there is true AI, and they can build and repair themselves, what reason would there be for us to continue to exist? Humans won't be any use to self-repairing and self-replicating androids. Robots may go on, but humans will be obsolete.

What reason would there be for us NOT to continue to exist?? What reason is there now for us to continue to exist? You say humans will become obsolete, which implies that right now we somehow aren't obsolete and serve some purpose. How so? How is that purpose (whatever it is) taken away by self replicating AIs?

Or are you simply afraid that AIs will somehow try to wipe us out? If these intelligent machines are truly intelligent, then for what purpose would they try to eradicate the human race?

jig

OhWell posted 06-16-99 10:45 AM ET
Tolls,

That is interesting. The main point of the computer-designing-computer idea, IIRC, was that after a few "generations" we would have a computer so complex that no human could understand it. In that context, "computer" referred to both hardware and software. So that is basically what I was referring to. I will admit that I do not keep up much with AI progress, though. It's about all I can do to keep up with Microsoft's "progress".

Philip McCauley,

"Why couldn't the machine simply ask..."

Show me the simple code!

Jig,

Very good points! As to the fear that a machine intelligence will "take over" or "wipe humans out": this is because humans can create things that they cannot fully control. If I were to worry about being "wiped out", I think I would be more concerned about artificially created germs than an artificially created machine intelligence.

Octopus posted 06-16-99 11:00 AM ET
"The main point of the computer designing computer IIRC was that after a few 'generations' we would have a computer that was so complex that no human could understand it."

We already build computers that are more complex than any single human can understand.

The problem with a computer which can improve upon itself is that it is very difficult to understand "the way you think". The question is analogous to asking you "how do you build a brain that thinks better?". It's a very difficult proposition, and likely one that no individual human could answer.

Tolls posted 06-16-99 11:12 AM ET
That seems to be the sticking point.
It's easy to set a goal like the example I gave above with creating a circuit to identify "go" and "stop"...there is a clear and easily defined goal there. Defining "intelligence"...well, it's not that easy...where do you start?

Developing a human-like intelligence will be a side-effect of piling other well-defined "mental" abilities onto a system...

OhWell posted 06-16-99 02:08 PM ET
Octopus,

"We already build computers that are more complex than any single human can understand..."

Two very good points:

"We"... That's one point. Humans are still designing the computers. True, there are many computer aids to the design, but there isn't a computer that designs a computer without human intervention.

The second point is that although these computers are "more complex than any single human can understand", they still are not intelligent, AFAIK.

It seems to me that "intelligence" will be a function of software more than hardware. Hardware technology has far outpaced advances in software. The computer that I am typing this on is at least 1000 times faster than the first machine that I had over 20 years ago. All of the software that I had for my first four computers would fit in the RAM of this machine! Yet it's not intelligent. Further, the software development tools that I use today are not 1000 times better than what I had 20 years ago. And, as Tolls pointed out, it is hard to define intelligence. And, as I said, humans might not recognize non-human intelligence anyway.

Like this thread!

JohnIII posted 06-16-99 02:33 PM ET
I'm all "defining intelligence"d out...
John III
Provost Harrison posted 06-16-99 03:17 PM ET
Brains and computers fulfil two totally different roles, and we must not forget the vital role computers play, just a totally different one from ours. They are not sentient, but they have different uses.
Octopus posted 06-17-99 12:23 AM ET
OhWell: "Humans are still designing the computers. True there are many computer aids to the design, but there isn't a computer that designs a computer without human intervention.

The second point is that although these computers are 'more complex than any single human can understand', they still are not intelligent AFAIK."

Exactly. The computers we design aren't even close to being intelligent like us, and we can't even fully understand them. Therefore, it is an even bigger leap to believe that a machine could be built which had the capability to improve itself easily.

The genetic algorithm approach does seem like our best bet at the moment. The hard part is coming up with parameterized hardware/software that can achieve what we want. We would have to construct a framework within which the intelligence could work, or we'd be doomed from the beginning. The interesting thing about this approach, however, is that it would be very non-linear. If we did succeed in finding such a framework, it would likely be non-obvious how to make one that was, say, 10% better.

"It seems to me that 'intelligence' will be a function of software more than hardware."

I agree with that. However, some people do have a problem with the notion of a completely deterministic system having the quality of "sentience". It also means that the computer you are using right now could be intelligent if it was running the right program. Interesting concept. Would it then be murder to turn your computer off?

"Further, the software development tools that I use today are not 1000 times better than what I had 20 years ago"

An interesting thing to note here is that the modern trend in software development takes it further and further away from the capabilities of an evolutionary program. When things are written in machine code, the space of "legal" programs is very large (compared to the space of possible combinations of bits), and there would tend to be a "fitness gradient" along which to find the optimal program. However, it is exceedingly unlikely that one could easily write an object-oriented program that had "evolutionary" parameters, because the space of legal programs is very small compared to the space of programs that somebody could write (I have trouble with many object-oriented languages, and I presumably know what I'm doing). Is this a problem, or am I not being clever enough in the way I'm thinking about "constructing" the candidate programs?
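One way to picture Octopus's contrast: over a tiny invented instruction set, any random edit still yields a legal, runnable program, which is exactly the property source-level languages lack (most random edits to source text do not even parse). A sketch with a made-up three-opcode "machine code":

```python
import random

# A made-up three-opcode "machine code". Every sequence over OPS is a
# legal program, so any random edit yields something that still runs.
OPS = ["INC", "DEC", "DOUBLE"]

def run(program, x=0):
    """Execute a program over a single accumulator starting at x."""
    for op in program:
        if op == "INC":
            x += 1
        elif op == "DEC":
            x -= 1
        else:               # "DOUBLE"
            x *= 2
    return x

def mutate(program):
    """Point mutation: replace one instruction with a random opcode."""
    i = random.randrange(len(program))
    return program[:i] + [random.choice(OPS)] + program[i + 1:]

prog = ["INC", "INC", "DOUBLE"]
print(run(prog))          # → 4, i.e. (0 + 1 + 1) * 2
print(run(mutate(prog)))  # the mutant always still runs; its value varies
```

Because every mutant is executable, fitness can be measured for all of them, and a smooth "fitness gradient" can exist; mutating object-oriented source text instead would mostly produce programs that cannot even be compiled, let alone scored.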

Provost Harrison: "They are not sentient, but have different uses."

Are you saying they can't be sentient?

Also, as far as the article that Tolls linked: it seems to me inherently questionable to try to exploit the analog properties of digital circuits (FPGAs in particular). That's not what they're built for, and their analog properties are probably not very reliable (as evidenced by the temperature sensitivity mentioned in the article). I'm also curious as to whether the author had any competent designers try to "beat" his evolutionary designs, to see how "optimal" they really were. I'm always skeptical of claims like "no human could have done this as well". Many academics underestimate the ingenuity of people who do real work. There are links to some papers, but I don't have the patience to read them right now.

CarniveaN posted 06-17-99 01:28 AM ET
Sorry to interrupt your knowledgeable and scientific debate...

Well, we now have the option of fusing circuits and body cells, although we don't know what to do with it. Soon, we might be able to control a machine through this connection. What is stopping us, then, from getting rid of the human body, taking the brain, and hooking it up to a machine? For example, build a robot dog and put in the actual brain of a dog. Does that classify as artificial intelligence? What if the brain is cloned, or otherwise 'produced'?

We now return you to your regularly scheduled forum topic...

Carny

Tolls posted 06-17-99 04:29 AM ET
Octopus:
Surely the point (at least the point I got out of it) was that under current design theories no one would have thought to use parts that aren't even connected to the main circuit. It generated something completely unique...I believe the number of "cells" used was fewer than current designs would involve as well.

In any case, I agree that simple digital systems aren't the best way to proceed, though digital neural nets can be impressive. Which is why the New Scientist stuff on the potential for mass-produced neuron-chips is so interesting. Not necessarily from the AI point of view, but from the decision-making PoV... and, possibly, allowing stuff to be wired up to us...

Octopus posted 06-17-99 10:22 AM ET
"Surely the point (at least the point I got out of it) was that under current design theories no one would have thought to use parts that aren't even connected to the main circuit."

But that's not a good thing. This is analogous to saying "the appendix is a triumph of evolution; no human designer would ever include an appendix in a person".

"I believe the number of "cells" used was fewer than current designs would involve as well."

I'm exceedingly skeptical of that. This is the researcher's assertion, but I didn't quite follow his logic. Ring oscillators are not that hard to design (although I'm trying to figure out if there would be any problems from the questionable nature of the analog and parasitic properties within the FPGA), and a circuit to distinguish between 1 kHz and 10 kHz should be very simple to construct, especially with a clock (which is a standard input to any digital circuit). The question of building a circuit that can distinguish between two arbitrary sounds is a different story, but I'm also not convinced that this would be very complex.
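Octopus's clocked discriminator amounts to counting clock ticks per input period: a 1 kHz tone produces ten times fewer edges per unit time than a 10 kHz tone. A software sketch of the counting idea, assuming a hypothetical 100 kHz sample clock and idealized square-wave input:

```python
def square_wave(freq, sample_rate=100_000, cycles=10):
    """Generate an idealized sampled square wave at the given frequency."""
    period = sample_rate // freq
    one_cycle = [1] * (period // 2) + [0] * (period - period // 2)
    return one_cycle * cycles

def classify(samples, sample_rate=100_000):
    """Estimate frequency by counting rising edges over the capture
    window, then decide which of the two known tones it is."""
    edges = sum(1 for a, b in zip(samples, samples[1:]) if (a, b) == (0, 1))
    freq = edges / (len(samples) / sample_rate)
    return "1 kHz" if freq < 5_000 else "10 kHz"

print(classify(square_wave(1_000)))   # → 1 kHz
print(classify(square_wave(10_000)))  # → 10 kHz
```

In hardware this is just a counter gated by the input signal's edges, which supports Octopus's point that the task needs very little logic once a reference clock is available.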

"In any case, I agree that using simple digital systems aren't the best way to proceed"

I'm in favor of large, complex digital systems. Anything that can be done with an analog system can be done with a digital system, as long as your sampling frequency is sufficiently high.

Tolls posted 06-17-99 11:18 AM ET
"But that's not a good thing"
Why? Those parts were still functional...without them the unit did not work! I'm not talking about useless parts of equipment here...
Octopus posted 06-17-99 10:18 PM ET
"Why? Those parts were still functional...without them the unit did not work! I'm not talking about useless parts of equipment here..."

Actually, you are. From the description (at least as I understood it), these blocks had no "functional" connection to the operational circuit, but were important because of their parasitic effects, etc. An analogy might be if you were constructing a house, where a distinctive feature was that it had a giant refrigerator in the bathroom. The refrigerator in this hypothetical house isn't there because the people living in the house want access to refrigerated items in the bathroom, but because if you move it, one of the walls falls down and the house falls over. The argument, then, is, "no architect would have ever designed a house like this, isn't it wonderful?". The people that need to pay for the "superfluous" refrigerator might not think so.

Tolls posted 06-18-99 04:31 AM ET
If these items are functioning parts of the system as a whole, then your analogy smacks of strawman... to me these things ARE doing something... they ARE allowing the system as a whole to work...
Octopus posted 06-18-99 10:17 AM ET
And the refrigerator is doing something, it's allowing the house to stay up. Maybe I misunderstood what was stated in the article, but I don't think so.

"A further five cells appeared to serve no logical purpose at all--there was no route of connections by which they could influence the output. And yet if he disconnected them, the circuit stopped working."

jig posted 06-18-99 10:28 AM ET
Oct: What's wrong with using a refrigerator as a wall? Hmmm...why don't humans ever design things like that?
OhWell posted 06-18-99 11:29 AM ET
I caught the last few minutes of a TLC (or was it Discovery Channel?) program the other night. At the time, a guy was talking about machine intelligence applied to space probes. They needed something so that the probe could get by on its own without getting into trouble due to the time lag in the radio signals. Anyway, he made the statement that they (the probes) were "about as intelligent as the average bug".

Octopus: Was there a particular article or paper that you and Tolls were discussing? Do you have a URL for it?

Thoughts on useless parts: if humans use only 10% of the brain, what does the other 90% do? Could that be a "useless part" that holds up the human intelligence? In a complex system, the functions and interactions of individual components are not always readily apparent.

Octopus posted 06-18-99 09:42 PM ET
The URL was in Tolls' post of 06-16-99 04:37 AM ET, but he didn't hyperlink it like this: http://www.newscientist.co.uk/ns/971115/features.html. He also directed us to http://www.newscientist.co.uk/ns/971115/features.html but I haven't looked at that yet.

"If humans use only 10% of the brain, what does the other 90% do?"

That "humans only use 10% of their brains" thing is one of those "well known facts" that has no basis in reality. Not all parts of the brain are active at all times, but there is little or no "dead weight" inside your skull.

Octopus posted 06-18-99 09:44 PM ET
Oops.

http://www.newscientist.com/nsplus/insight/ai/ai.html

SnowFire posted 06-19-99 12:35 PM ET
On neurons being really complex: Hey, why do we have to use mechanical neurons? Couldn't microchips simulate neurons?
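In fact, simulating a simplified neuron in silicon (or software) is routine. Here is a toy "leaky integrate-and-fire" model in Python; the constants are illustrative rather than biologically calibrated, and real neurons do far more, but it shows the kind of rule a chip would implement:

```python
# Toy leaky integrate-and-fire neuron. The membrane potential leaks
# toward a resting value, is pushed up by input drive, and emits a
# "spike" whenever it crosses a threshold. All constants are made up
# for illustration, not biologically calibrated.
V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0   # millivolts
TAU = 10.0   # membrane time constant (ms)
DT = 1.0     # simulation timestep (ms)

def simulate(input_drive, steps):
    v = V_REST
    spikes = []
    for t in range(steps):
        # leak toward rest, pushed by the input drive
        v += (DT / TAU) * (V_REST - v + input_drive)
        if v >= V_THRESH:
            spikes.append(t)   # fire...
            v = V_RESET        # ...and reset
    return spikes

print(simulate(20.0, 100))   # steady input gives a regular spike train
print(simulate(0.0, 100))    # no input, no spikes
```

A network of these is still nothing like a brain, but it is the sort of abstraction chip designers start from.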

Also, why shouldn't machines fear death? In fact, that's one thing that might keep them from the ever-so-feared "AI kills all humans" thing- we're integral to running those computers and providing input and designing new computers and keeping the power plants running, etc.

Hence the inherent need for the laws of robotics outlined by Asimov.

I think everyone should not just read the FAQ on the meaning of life, but click on all the links as well. There's one explaining why the Asimov Laws would never work: the computer would either behave irrationally (become mentally ill) or all the "proscribed thoughts" would break free of the programming.

http://singularity.posthuman.com/AI_design.temp.html#pre_prime

Read the other parts too- great info on Seed AI's and stuff!

On the 10% of our brains thing: As Octopus says, we use all our brains all the time. What I think this refers to is the capacity of our brain. We store memories by combinations of neurons. Let's assume, for the sake of argument, that we have 26 neurons, each named after a letter of the alphabet. While all of these neurons are probably being used for several memories, the total number of memories/sensations/etc. that can be stored is 26 + 26*25 + 26*25*24 + ... + 26*25*24*...*1. That's every ordered selection of distinct neurons, from simply "A" to "GERLHA" to "QWERTYUIOPASDFGHJKLZXCVBNM." Use a few billion neurons, and you see that using only 10% of the possible combinations is still a lot of memories.
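For what it's worth, that counting scheme (every ordered selection of one or more distinct neurons) can be checked directly:

```python
from math import perm  # available in Python 3.8+

def ordered_selections(n):
    # number of ordered selections of 1..n distinct items out of n:
    # n + n*(n-1) + n*(n-1)*(n-2) + ... + n!
    return sum(perm(n, k) for k in range(1, n + 1))

print(ordered_selections(4))    # 4 + 12 + 24 + 24 = 64
print(ordered_selections(26))   # already astronomically large
```

With 26 "neurons" the total is already on the order of 10^27, so a few billion neurons gives more combinations than could ever be used.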

SnowFire posted 06-19-99 12:49 PM ET     Click Here to See the Profile for SnowFire  Click Here to Email SnowFire     
Sorry about the double post, but here's a great quote from that site, detailing one of the many problems with the Asimov Laws:

What I'm pointing out is that coercions are "above" the AI, and are supposed to have authority over the AI, so that when they break, they totally screw up the system - including all the other coercions. It is possible to design an elegant rational system that deals with rational conflicts. I'd really like to see an elegant rational system that deals with broken pieces of specialized override code. It would be fun to watch it fail. Imagine a corporation where all the layers of management demand absolute obedience from those below, and refuse to listen to any questions. Now imagine giving all the managers LSD. It doesn't matter how smart the engineers are, that company is going to die. (In fact, who needs LSD? The company would probably die in any case from accumulated natural stupidity. Happens all the time.) And if the coercions aren't absolute, if they're just suggestions, then what's the point?

[snip]

The Principle of Irrationality: If you design an irrational system, it will become irrational in a way you don't expect.

Tell an AI that two and two make five, and it will conclude that five equals zero.

OhWell posted 06-21-99 08:41 AM ET     Click Here to See the Profile for OhWell    
Octopus: Thanks for the URL, I will check out that site.

Saw "Matrix" over the weekend. We DO NOT need machine intelligence!

OhWell runs away screaming. Stops. Thinks: "Wait, I can control what happens here!" Turns and makes bullets stop in mid-air. Then makes the bullets turn around and shoot back at the bad guys! Yeah, that's the ticket!

SnowFire posted 06-24-99 12:57 PM ET     Click Here to See the Profile for SnowFire  Click Here to Email SnowFire     
Oh Well: Read the Singularity site. I'd say that's a pretty good solution to the world's problems; I think we need machine intelligence.
GaryD posted 06-25-99 08:51 AM ET     Click Here to See the Profile for GaryD    
Why do I spot these interesting ones after they've reached a massive length?

Did our previous threads ever come to a conclusion as to what intelligence is, anyway?

As I see it, if you are of the opinion that human intelligence arises solely from the interaction of the neurons within your brain, then you must surely believe in the eventual creation of machine intelligence (if we, or some other species, live long enough to perform the creation). It is just a case of achieving the ability to produce the level of complexity found in the human brain (or near to it). A massive neural network, perhaps. This probably implies even machine intelligence has to evolve.
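The building block of such a network is simple: a unit that takes a weighted sum of its inputs and squashes it through a nonlinearity. A minimal sketch, with weights hand-picked for illustration rather than learned:

```python
import math

def sigmoid(x):
    # squashing nonlinearity: maps any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # the standard artificial abstraction of a neuron: a weighted sum
    # of inputs, plus a bias, passed through the nonlinearity
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def layer(inputs, weight_rows, biases):
    # a layer is just many neurons reading the same inputs
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Hand-picked weights that make one unit behave like a soft AND gate:
print(neuron([1, 1], [10.0, 10.0], -15.0))   # close to 1
print(neuron([0, 1], [10.0, 10.0], -15.0))   # close to 0
```

The interesting (and, at brain scale, unsolved) part is wiring up and training billions of these.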

If, on the other hand, you think that the brain just acts to a) keep the maintenance tasks going for the body, and b) interpret the control of something more spiritual/soul-like then the whole situation becomes much more complex.

Initially it would seem as if you were doomed never to achieve machine intelligence, as the best you could manage would be these maintenance tasks and responses that may look intelligent but have no intelligence behind them. But if a soul/spirit is able to inhabit a carbon-based body, who is to say that it could not inhabit a silicon-based one instead?

Silicon heaven: after all, where else would all the old calculators go?

So which murderer is going to pull the mains plug out?

Now excuse me as I try that link out and see what all the fuss is about.

GaryD posted 06-25-99 09:06 AM ET     Click Here to See the Profile for GaryD    
Wow! But what a lot of work for a simple circuit. Is this ever likely to be viable in the marketplace? I'm not surprised that there were problems transferring the circuit elsewhere. The first thing I thought of was: what about race hazards, if the design is going to depend on un-guaranteed aspects like capacitance/inductance/etc.? Still, one to keep an eye on!
SnowFire posted 06-25-99 12:41 PM ET     Click Here to See the Profile for SnowFire  Click Here to Email SnowFire     
GaryD: I only recently got it, but I suggest you follow the suggestion of the author of the Singularity site mentioned before and read Godel, Escher, Bach. In short, contrary to our good American democracy, there are varying degrees of souls (else why would we kill flies, cows, etc.?). A certain book, Principia Mathematica, was even said to have a rudimentary soul (or rather, the Math contained inside it did). And machines should learn to possess souls as well. But not the ones we have currently.
JayPegg posted 06-25-99 01:10 PM ET     Click Here to See the Profile for JayPegg  Click Here to Email JayPegg     
I just discovered this topic and I don't have time to read all of it. I don't know if anyone has covered this yet, but we are already teaching AI.

We are giving computers objectives. They then try to complete those objectives. The first time, the computer doesn't do such a great job: it makes mistakes, it gets lost, stuff like that. This is the first generation.

It (the computer) then takes what it learned in the first gen into the second gen. It gets better, but still needs lots of work.

In about, say, 1000 gens (which would take about a day or two) you come out with a perfect program, one that cannot be beat. What was that? Did I hear someone say Darwin?
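What JayPegg is describing is essentially a genetic algorithm. A minimal sketch, with a toy "count the 1 bits" objective and purely illustrative parameters:

```python
import random

random.seed(0)   # fixed seed so the run is reproducible

GENOME_LEN = 20      # objective: maximize the number of 1 bits ("OneMax")
POP_SIZE = 30
MUTATION_RATE = 0.05

def fitness(genome):
    return sum(genome)

def mutate(genome):
    # flip each bit independently with small probability
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for generation in range(1000):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == GENOME_LEN:
        break                       # a "perfect program" evolved
    parents = pop[:POP_SIZE // 2]   # the fitter half survives unchanged
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(generation, fitness(best))
```

On an objective this easy a perfect genome usually evolves in well under 100 generations; the hard part in practice is writing a fitness function for a problem you actually care about.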
GaryD posted 06-25-99 01:56 PM ET     Click Here to See the Profile for GaryD    
Now I recognise this place (singularity). Been here before and thought "Gawd what a lot of stuff. Maybe I'll come back and read some of it sometime."

Eh ? Oh you're referring to the 'Eternal Golden Braid' book are you. So much to read already, so little time...

Maybe I'll get it sometime but not this weekend I guess. Thanks for the tip though.

SnowFire posted 06-25-99 08:53 PM ET     Click Here to See the Profile for SnowFire  Click Here to Email SnowFire     
JayPegg: Read the site! Those generations spin by really fast. The key issue is an AI that can rewrite and improve itself. Then it can help rewrite itself further. Intelligence leads to technology, and now technology, through AIs, leads to more intelligence (through rewriting and expanding its code). Intelligences build intelligences build intelligences that are soon all but indistinguishable from the original. And really, really fast too. It's a positive feedback loop spiraling up to infinity.
JayPegg posted 06-25-99 11:35 PM ET     Click Here to See the Profile for JayPegg  Click Here to Email JayPegg     
Now that's what I'm talking about Snowfire. Glad to see someone is on track.
GaryD posted 06-28-99 12:52 PM ET     Click Here to See the Profile for GaryD    
Ok ok you got me. I've ordered the darn book. I'll blame you lot if it ain't any good !

Meanwhile, "Wot no more posts ?"

Artificial intelligence, if possible at all, must be possible from a deterministic computer, as true randomness cannot exist. In any case, 'real' intelligence is demonstrated by humans, and their actions are predetermined, so there is no problem.
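On the randomness point: even a fully deterministic computer can manufacture numbers that pass statistical tests for randomness. A minimal linear congruential generator (the multiplier and increment are the well-known Numerical Recipes constants; this is a sketch, and nowhere near cryptographic quality):

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    # x_{k+1} = (a * x_k + c) mod m: a fixed arithmetic rule with
    # nothing random about it, yet the output looks statistically random
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)   # scale into [0, 1)
    return out

print(lcg(42, 3))                 # same seed, same sequence, every time
print(lcg(42, 3) == lcg(42, 3))   # True: fully deterministic
```

Whether that kind of pseudo-randomness is "enough" for intelligence is exactly the open question.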

DerekM posted 06-28-99 02:04 PM ET     Click Here to See the Profile for DerekM  Click Here to Email DerekM     
What the hell is "artificial" intelligence? It can't just mean, "made by man," because we do that all the time in the form of babies. It doesn't refer to silicon, because silicon is just a means. We used to do the same thing with vacuum tubes, we may do the same thing in the future with light, neurons or particle spin states.

Does it mean the same as artificial flavors? An artificial flavor is a flavor which mimics another flavor. So, are we talking about mimicking human abilities? That seems awfully limiting -- why would you want something that is just the same as what you already have (people)?

It seems to me that what people think about when they think about AI is creations which combine the abilities of computers (the ability to do repetitive tasks with a high degree of precision without tiring) with the abilities of humans (inductive reasoning, artistic flair, intuitive recognition of complex patterns like voices and faces, etc.).

What would be the point? Why wouldn't we just enhance our own abilities to handle repetitive tasks, rather than create new creatures that can do both? Is this a flaw in how AI is commonly depicted in fiction?

Tolls posted 06-29-99 04:12 AM ET     Click Here to See the Profile for Tolls  Click Here to Email Tolls     
To some extent the point is to try and further our understanding of how the brain functions, I suppose. The closer we get to being able to mimic it, the closer we get to understanding it.
GaryD posted 06-29-99 04:57 AM ET     Click Here to See the Profile for GaryD    
DerekM: "What the hell is 'artificial' intelligence?" I asked much the same in the previous thread. (Gawd knows where it went.) If I recall correctly, the only reply I had was along the lines of: it is intelligence that has been created, as opposed to coming into existence naturally (or something like that). I guess that is as good a definition as any.

The definition isn't restricted to mimicking humans, but as this is arguably the best example of intelligence we have to date, it is a good place to start. We agree humans are intelligent (to a greater or lesser degree) so let's go for that.

"Why wouldn't we just enhance our own abilities to handle repetitive tasks, rather than create new 'creatures'?" I guess because we'd rather live a life of luxury, pursuing our own interests, than have to continue with necessary chores all our lives. In any case, the fact that it is a challenge is probably reason enough in itself.

Powered by: Ultimate Bulletin Board, Version 5.18
© Madrona Park, Inc., 1998.