Suikoden Uncouth and Informational Karma Old Xperience




If we could, should we develop AI?

 
Forum Index -> Community Forum
Septimus




Joined: 06 May 2007
Post Count: 39

43015 Potch
0 Soldiers
0 Nation Points

PostPosted: Thu Oct 11, 2007 1:48 pm    Post subject: If we could, should we develop AI?

What do you think of the scenarios presented in stories like The Matrix or I, Robot, in which true AI has been invented? Would we be taking a clear risk by inventing it, given that it would inevitably evolve beyond our control? Or is it the next step in evolution?

True AI would be truly interesting if invented, though it could evolve at an extraordinary rate, beyond our control. If you could give a person the computational speed of a computer, you'd start seeing a lot of interesting things happening with the psyche. Some people claim computers are the next step in evolution. They could be, but most people who say this are quasi-scientists who are merely hoping, and don't really see the logical boundaries of their statement.

True AI is hard to distinguish from really well-made artificial AI, which I think will be the next step. We could do really interesting things if someone sat down and made a really complex AAI, and releasing that in a game would open up nearly unlimited possibilities. So I think I can settle for that first.

I don't think AI going 'out of control' is a far-fetched scenario. It would probably be modeled on humans, who are products of evolution, so even if we didn't expect it, it might start caring about its own existence. And if an AI could update itself, it could really do some fascinating things on its own, some of which might be unpredictable or 'out of control'.
retrospect.

Cosby You! Black Entertainer


Joined: 10 Mar 2007
Post Count: 431
Location: Lion's Maw
102725 Potch
420 Soldiers
30 Nation Points

PostPosted: Thu Oct 11, 2007 4:17 pm    Post subject:

personally, robots should never have emotions or human qualities. that's when they kill everyone.
Ujitsuna

Red Shoes Dance


Joined: 24 May 2006
Post Count: 4823
Location: Pale Plains
936547 Potch
12000 Soldiers
675 Nation Points

PostPosted: Thu Oct 11, 2007 4:19 pm    Post subject:

The scenario painted in The Matrix scares the hell out of me, but that is just fantasy. Other than helping and maybe caring for humans, or doing their jobs, I don't see why we'd need to make robots that can do everything humans can.
Gil-galad

Flame of the West


Joined: 04 Jul 2004
Post Count: 6007
Location: Aya Sankha
2849957 Potch
200 Soldiers
46 Nation Points

PostPosted: Thu Oct 11, 2007 4:20 pm    Post subject:

If it was true artificial intelligence, then I honestly don't think we'd have any more to worry about from these machines than we do from the people around us. As long as we don't make AI so vastly superior to ourselves, we should be safe. heh. That's not to say we should limit the development of our machines' computing power; by all accounts they're developing computers now that surpass the computing power of the human mind (or so I saw in the now-famous "Shift Happens" video), so it should be fine so long as we keep these much more powerful new machines out of any form in which they could cause physical or infrastructural harm to us.

To be honest, I don't think we'd be in any more danger from machines than we are from ourselves. Especially because this artificial intelligence is based on our intelligence.
Septimus




Joined: 06 May 2007
Post Count: 39

43015 Potch
0 Soldiers
0 Nation Points

PostPosted: Thu Oct 11, 2007 5:44 pm    Post subject:

Gil-galad wrote:
To be honest, I don't think we'd be in any more danger from machines than we are from ourselves. Especially because this artificial intelligence is based on our intelligence.


And you don't find that scary?
Gil-galad

Flame of the West


Joined: 04 Jul 2004
Post Count: 6007
Location: Aya Sankha
2849957 Potch
200 Soldiers
46 Nation Points

PostPosted: Thu Oct 11, 2007 7:17 pm    Post subject:

If you're not afraid of the dangers of humanity, then you shouldn't be too afraid of the dangers that AI pose. If you are afraid of the dangers of humanity, then... well, you're already afraid so what difference does it make?
retrospect.

Cosby You! Black Entertainer


Joined: 10 Mar 2007
Post Count: 431
Location: Lion's Maw
102725 Potch
420 Soldiers
30 Nation Points

PostPosted: Fri Oct 12, 2007 6:39 am    Post subject:

Gil-galad wrote:
If you're not afraid of the dangers of humanity, then you shouldn't be too afraid of the dangers that AI pose. If you are afraid of the dangers of humanity, then... well, you're already afraid so what difference does it make?


robot > human, that's why. robots, given logic and the capacity to evolve intellectually, would do it considerably faster, because they understand the capability of robothood and are more logical and methodical than people are. thus begins the death of mankind. it's just science.
Loran Cehack

Moonlight Butterfly


Joined: 30 Mar 2005
Post Count: 1891
Location: Dunan Delta
203221 Potch
200 Soldiers
122 Nation Points

PostPosted: Fri Oct 12, 2007 6:58 am    Post subject:

What purpose would AI have in killing us? It's not as if they'd have a reason to. Unless, of course, we grossly abused them in such a way (which would be pretty typical of us) that we'd probably deserve it if they turned on us.
Drizzt

Rangers Of Mielikki


Joined: 12 Oct 2004
Post Count: 1434

250081 Potch
0 Soldiers
0 Nation Points

PostPosted: Fri Oct 12, 2007 10:05 am    Post subject:

I'd simply have to agree with Hayashi Ujitsuna, but more to the point: once we have true or more reliable AI, that's when 99.9% of the planet's population kindly loses their jobs; for legitimate reasons, of course (businesses wouldn't want the Robot Unions on their asses).

Seriously though (I've just been watching The Matrix and the first Resident Evil movie), I think the dangers of this concept may just outweigh the benefits. My real reasons mostly revolve around how computers cost a lot of people their jobs every day. In printing (my trade), making plates (which are used to transfer the image being printed) used to take a whole team of people. Now it's done by just one or two people per 24-hour period. Productive and good business, yes, but overall just bad for the folk who rely on the money the job brings in. :wink:
_________________
“Firbolgs die with honor,” Morten explained as the logs beneath Tavis began to burn. “We don’t beg for mercy. We don’t show pain. We just die.”
“Maybe we skin you alive,” Noote warned. “That hurt plenty."
- The Twilight Giants, Book I.
ThricebornPhoenix

Phoenix Legion


Joined: 04 Sep 2007
Post Count: 46

102576 Potch
0 Soldiers
0 Nation Points

PostPosted: Sat Oct 13, 2007 5:47 am    Post subject: Re: If we could, should we develop AI?

Septimus wrote:
I don't think AI going 'out of control' is a far-fetched scenario. It would probably be modeled on humans, who are products of evolution, so even if we didn't expect it, it might start caring about its own existence.


Actually, it would be based on human perceptions of humans, which is altogether more frightening!

It would be foolish, I think, to place no restrictions on an AI unless it were kept in a self-contained environment from which it could not escape or interact with the outside world. Everything else we make is created with safeguards and fail-safe mechanisms, so why should a machine that can think for itself (something many people can hardly do) be different?

I also think the AI in the book I, Robot, though more boring, is more plausible than that depicted in the movie. When mundane problems will surely abound, why look for the incredible? In short, I don't believe we have any reason to fear advanced AI any more than anything else we've made.
_________________
"[...] even a seemingly purposeless miracle is an inexhaustible source of hope, because it proves to us that since we do not understand everything, our defeats—so much more numerous than our few and empty victories—may be equally specious."
All times are GMT - 4 Hours
suikox.com by: Vextor


Powered by phpBB © 2001, 2005 phpBB Group