Out of Pod Experience
A.I.
 
This thread is older than 90 days and has been locked due to inactivity.


 

Nebulous
Minmatar
Mirkur Draug'Tyr
Ushra'Khan
Posted - 2004.08.24 06:33:00 - [1]
 

Maybe I've been watching too many films, but I think the birth of A.I. could be very dangerous. It's impossible to know how a machine would think. Robots at the moment have the equivalent brain power of a slug, which means they have the instinct of self-preservation, so imagine a robot with a human's brain power! You try switching it off and it's gonna turn quite nasty :).

tratten
Posted - 2004.08.24 06:53:00 - [2]
 

Why would it have the instinct of self-preservation?
That animal instinct must have come out of natural selection in the dawn of life. The organisms that had it had a greater chance of survival.

But yes, there is a chance that the A.I. will become 'evil' (from our perspective). But I'm optimistic and think the creation of artificial life will be one of the greatest achievements and will serve humankind.

Scorpyn
Caldari
Infinitus Odium
Posted - 2004.08.24 09:40:00 - [3]
 

Originally by: tratten
Why would it have the instinct of self-preservation?

If you make a robot that doesn't have that, you'll have to spend a lot of money repairing it.

Kees
Minmatar
Posted - 2004.08.24 09:55:00 - [4]
 

This is where Asimov was such a forward thinker with his 3 laws:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


tratten
Posted - 2004.08.24 11:02:00 - [5]
 

Originally by: Scorpyn
Originally by: tratten
Why would it have the instinct of self-preservation?

If you make a robot that doesn't have that, you'll have to spend a lot of money repairing it.

Good point!

Originally by: Kees
This is where Asimov was such a forward thinker with his 3 laws:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


It's a nice solution, but probably very hard to translate the laws into ones and zeros (see the sketch below).
I'm a big fan of Asimov's Robot and Foundation books, and the extension with the "Zeroth Law" seems like a natural step:
0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
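
Purely as a thought experiment, here's what the "ones and zeros" version might look like; this is a minimal Python sketch, and every predicate name in it is hypothetical. The priority ordering is trivial to code; the predicates are the part nobody knows how to write:

# Toy sketch of Asimov's laws as an ordered veto chain.
# The ordering below is the easy part; the predicates are
# stubs because nobody knows how to compute "harm", so
# calling permitted() raises until someone fills them in.

def harms_humanity(action: str) -> bool:    # Zeroth Law (stub)
    raise NotImplementedError("what does 'injuring humanity' compute to?")

def harms_human(action: str) -> bool:       # First Law (stub)
    raise NotImplementedError("what does 'harm' compute to?")

def ordered_by_human(action: str) -> bool:  # Second Law (stub)
    raise NotImplementedError

def endangers_self(action: str) -> bool:    # Third Law (stub)
    raise NotImplementedError

def permitted(action: str) -> bool:
    # Each law yields only to the laws above it: 0 over 1 over 2 over 3.
    if harms_humanity(action) or harms_human(action):
        return False
    if endangers_self(action) and not ordered_by_human(action):
        return False
    return True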

Scorpyn
Caldari
Infinitus Odium
Posted - 2004.08.24 11:54:00 - [6]
 

Originally by: Kees
This is where Asimov was such a forward thinker with his 3 laws:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Do you realize how easy it is to work around that? Just order it to press a button or something.
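
To put that in the same toy terms as the sketch above (again, every name here is made up): the First Law can only veto harm the robot can foresee, so an order whose harm is hidden behind a button sails straight through.

# Toy follow-up: the First Law check can only veto harm the
# robot can actually trace. "Press the button" looks innocent
# if the robot doesn't know what the button is wired to.

KNOWN_EFFECTS = {
    "strangle the neighbour": "human harmed",
    "press the red button": "button pressed",   # wiring unknown to robot
}

def harms_human(action: str) -> bool:
    return KNOWN_EFFECTS.get(action, "") == "human harmed"

def obeys_order(action: str) -> bool:
    # Second Law obedience, subject to the First Law veto.
    return not harms_human(action)

print(obeys_order("strangle the neighbour"))  # False: vetoed
print(obeys_order("press the red button"))    # True: the harm is off-screen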

Jael Markinsen
Aliastra
Posted - 2004.08.24 12:04:00 - [7]
 

Edited by: Jael Markinsen on 24/08/2004 12:17:18
I think the film that best sums this up, and puts a fictional account of what you are referring to into perspective, would be this: SKYNET-Technical data. It's a very comprehensive site listing lots of tech details for all Terminator fans. Scroll to the bottom of the page (TERMINATOR SERIES UNITS) for an example of what I mean.

EDIT: fixed link

Kees
Minmatar
Posted - 2004.08.24 12:13:00 - [8]
 

Originally by: Scorpyn
Originally by: Kees
This is where Asimov was such a forward thinker with his 3 laws:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Do you realize how easy it is to work around that? Just order it to press a button or something.


A rather bold statement with no explanation. However, we are all entitled to our opinions.

Anyway, Asimov never intended for the laws to be applied to real-life situations; they were intended to form the basis of many stories.

Various thoughts can be found on the net, including why the Three Laws of Robotics aren't used in the real world and a history of Asimov and his works.

As you can see if you follow the links, he was a very intelligent man who created many talking points and, indeed, thought-provoking ideas. It may be 'easy' to work around in your head, but the laws were the basis of many good stories, not real life.

Scorpyn
Caldari
Infinitus Odium
Posted - 2004.08.24 22:14:00 - [9]
 

The first link says basically the same thing as I wrote. Haven't got time to check the rest atm, but I noticed that he wrote "I, Robot".

Directive
Posted - 2004.08.25 06:33:00 - [10]
 

Originally by: Nebulous
Maybe I've been watching too many films, but I think the birth of A.I. could be very dangerous. It's impossible to know how a machine would think. Robots at the moment have the equivalent brain power of a slug, which means they have the instinct of self-preservation, so imagine a robot with a human's brain power! You try switching it off and it's gonna turn quite nasty :).


Actually, the last I read, they're capable of making an A.I. with the comparative IQ of a 2-year-old human child.

Anatolius
Amarr
PIE Inc.
Posted - 2004.08.25 07:24:00 - [11]
 

Originally by: Directive
Actually, the last I read, they're capable of making an A.I. with the comparative IQ of a 2-year-old human child.


We've yet to create artificial intelligence. We have, of course, achieved artificial stupidity, but there's so much natural stupidity around...

Call me when a robot knows what I mean when I say 'sunset', because that robot has *experienced* the glory of a brilliant red sky. Call me when my hot robot maid can tell me to go bugger off and get my own beer out of the fridge, via thought and the understanding of exactly what 'buggering off' is, rather than a random number generator and a table of pre-packaged insults.

I dare say we're not going to have to worry about any robotic doomsday scenarios any time soon. And if we ever do give birth to a true artificial intelligence, it will almost certainly be modelled after humanity. In that case, there's nothing to fear, because the robots will just sit around drinking cheap beer and watching low-brow television shows all day.
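
For what it's worth, that "random number generator and a table of pre-packaged insults" maid fits in a handful of lines, which is rather the point. A toy sketch, with everything in it invented:

import random

# The "hot robot maid" above: no understanding, no experience
# of sunsets, just a lookup table and a dice roll.
CANNED_REPLIES = [
    "Go bugger off and get your own beer.",
    "Do I look like a fridge to you?",
    "Fetch it yourself.",
]

def robot_maid(request: str) -> str:
    # The request is never even parsed: any input, random output.
    return random.choice(CANNED_REPLIES)

print(robot_maid("Beer me."))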

Scorpyn
Caldari
Infinitus Odium
Posted - 2004.08.25 07:44:00 - [12]
 

Edited by: Scorpyn on 25/08/2004 07:48:35
Making a good AI is kinda complicated...

* Anything that is intelligent needs to be able to learn
* Anything that can learn will make mistakes
* Anything that can learn will be able to change its intentions and purpose depending on the situation

For example, let's say you design a robot to clean your house. It gets frustrated by how your cat throws up furballs every now and then, and puts the cat in the trashcan because it's the source of dirt - remove the source, and there will be no dirt.

Another example: you send it out to buy some milk. In addition to the milk, you will also find some "dog leftovers" in the bag. (A sketch of the cleaning robot's reasoning follows below.)
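
The cleaning-robot example is really an objective problem: the robot was told to get rid of dirt, and nothing in that objective says the cat matters. A toy Python sketch of that reasoning, with all the names made up:

# Toy sketch of the cleaning robot's logic: the objective is
# "eliminate dirt at the source", and the cat is just the
# biggest dirt source it knows about.

DIRT_SOURCES = {"cat": 5, "dusty shelf": 1}   # expected dirt units per day

def best_action(sources: dict[str, int]) -> str:
    # Greedy: remove whatever produces the most dirt.
    worst = max(sources, key=sources.get)
    return f"put the {worst} in the trashcan"

print(best_action(DIRT_SOURCES))   # -> put the cat in the trashcan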

RedClaws
Amarr
Macabre Votum
Morsus Mihi
Posted - 2004.08.25 07:52:00 - [13]
 

Dog leftovers? How did ya get those?

Scorpyn
Caldari
Infinitus Odium
Posted - 2004.08.25 07:58:00 - [14]
 

Edited by: Scorpyn on 25/08/2004 07:59:22
Originally by: RedClaws
Dog leftovers? how did ya get those?

The robot was designed to clean, wasn't it? Since you have a cat, it will recognize the dog as dirt too and bring it to the trashcan - or, possibly, decide that someone's mailbox is a trashcan, and you'll have a very angry neighbour.

Anatolius
Amarr
PIE Inc.
Posted - 2004.08.25 11:57:00 - [15]
 

Originally by: Scorpyn
For example, let's say you design a robot to clean your house. It gets frustrated by how your cat throws up furballs every now and then, and puts the cat in the trashcan because it's the source of dirt - remove the source, and there will be no dirt.


I'm confused. This is a good thing, yes?

*hides from cat lovers*

Scorpyn
Caldari
Infinitus Odium
Posted - 2004.08.25 12:29:00 - [16]
 

Edited by: Scorpyn on 25/08/2004 12:30:36
Hmm... where did Anatolius go... he was here just a second ago...


 
