
The AI Thread



Here's the idea: Let's assume that people will one day create artificial intelligence, AI. Say there is one android that kills its owners. I think we should punish that one android as if it were human [prison or death... or perhaps erase its memory?] and go on with our business. What's to say an android that glitches and kills its owners is anything other than a serial killer, i.e. a glitch within the human species?

https://forum.watmm.com/topic/43067-the-ai-thread/

It's not a stretch to imagine a short circuit or malfunction in a murderer's brain bypassing 'normal' empathy and restraint.

 

But maybe killing is a deep-seated human instinct going back through the ages, which we have since learned to suppress?

 

 

 

edit for poor sentence structure

Edited by keltoi

jjbms1.jpg

 


 


  Braintree said:
Here's the idea: Let's assume that people will one day create artificial intelligence, AI. Say there is one android that kills its owners. I think we should punish that one android as if it were human [prison or death... or perhaps erase its memory?] and go on with our business. What's to say an android that glitches and kills its owners is anything other than a serial killer, i.e. a glitch within the human species?

 

Disassemble.

 

 

Dead.

 

 


Have you guys ever read about The Chinese Room Argument?

It's worth a read if you're interested in AI:

 

  Quote
The Chinese Room argument comprises a thought experiment and associated arguments by John Searle (Searle 1980), which attempts to show that a symbol-processing machine like a computer can never be properly described as having a "mind" or "understanding", regardless of how intelligently it may behave.

 

Chinese room thought experiment

 

Searle requests that his reader imagine that, many years from now, people have constructed a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, using a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All of the questions that the human asks it receive appropriate responses, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human being. Most proponents of artificial intelligence would draw the conclusion that the computer understands Chinese, just as the Chinese-speaking human does.

 

Searle then asks the reader to suppose that he is in a room in which he receives Chinese characters, consults a book containing an English version of the aforementioned computer program and processes the Chinese characters according to its instructions. He does not understand a word of Chinese; he simply manipulates what, to him, are meaningless symbols, using the book and whatever other equipment, like paper, pencils, erasers and filing cabinets, is available to him. After manipulating the symbols, he responds to a given Chinese question in the same language. As the computer passed the Turing test this way, it is fair, says Searle, to deduce that he has done so, too, simply by running the program manually. "Nobody just looking at my answers can tell that I don't speak a word of Chinese," he writes.[1]

 

This lack of understanding, according to Searle, proves that computers do not understand Chinese either, because they are in the same position as he — nothing but rote manipulators of symbols: they do not have conscious mental states like an "understanding" of what they are saying, so they cannot fairly and properly be said to have minds.
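Mechanically, the room Searle describes is nothing but a lookup from input symbols to output symbols. A minimal sketch of that idea (the rulebook entries here are invented for illustration, not taken from Searle):

```python
# Hypothetical sketch of the Chinese Room's mechanics: a fixed rulebook
# maps input symbol strings to output symbol strings. Nothing in the
# program represents meaning; it only matches patterns, as Searle's
# operator does with his book and filing cabinets.
RULEBOOK = {
    "你好吗": "我很好",            # "How are you?" -> "I'm fine" (illustrative entries)
    "你叫什么名字": "我叫小明",     # "What's your name?" -> "My name is Xiaoming"
}

def room(symbols: str) -> str:
    """Return the rulebook's output for the given input symbols.

    The function never parses or 'understands' the characters; an
    unrecognized input just gets a stock fallback ("please say it again").
    """
    return RULEBOOK.get(symbols, "请再说一遍")

print(room("你好吗"))  # 我很好
```

To an outside questioner whose questions happen to be covered by the table, the output is indistinguishable from a speaker's; that gap between behavior and understanding is exactly the point of the thought experiment.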


Guest Drahken

Yeah, that's a good argument. I don't doubt we'll one day develop an AI, though I don't think it will come from a purely computer-science and hardware-oriented field. Computational neuroscience is where it's at, and there are quite a few programs in the works that are figuring out more and more about how the human mind works and recreating those structures on computers. That Japanese program mapping neural signals is a good example.

 

I wouldn't be surprised if, when an AI is developed to a 'conscious' level, it turns out to be some kind of programming or modification of a biological system rather than a purely technical one. First they'll figure out how to modify mouse brains to perform certain tasks, then they'll figure out how to grow them in a dish and do the same thing.
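As a toy example of the kind of structure computational neuroscience recreates on computers, here is a minimal leaky integrate-and-fire neuron, one of the simplest standard model units in the field. All the parameter values below are invented for illustration, not taken from any real study:

```python
# Hypothetical sketch: a leaky integrate-and-fire (LIF) neuron.
# Membrane voltage leaks toward rest, integrates input current, and
# emits a spike (then resets) whenever it crosses a threshold.
def simulate_lif(current, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0):
    """Integrate an input-current trace; return the spike times (step indices)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        # leak toward resting potential, plus input drive
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:          # threshold crossing -> spike, then reset
            spikes.append(step)
            v = v_rest
    return spikes

print(simulate_lif([0.2] * 20))  # [6, 13]: constant drive gives regular spiking
```

Chaining many such units with weighted connections is roughly what "recreating those structures on computers" amounts to at the lowest level.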


  Drahken said:
I wouldn't be surprised that if and when an AI is developed to a 'conscious' level it will be some kind of programming or modification of a biological system rather than a technical one. First they'll figure out how to modify mice brains to perform certain tasks, then they'll figure out how to grow them in a dish and do the same thing.

 

There's a Danish science magazine called Illustrated Science that I receive every month or so, and they had an article about this. Fuck, if only I could remember which issue it was. All I can remember is that they had somehow attached a robot to the brain of a rat or mouse, and this made the robot move around...


Guest Drahken

Yeah, one of the things our generation may come face to face with is a shift in what we consider 'artificial'. Right now that usually applies to things that are non-organic, but I foresee a shift as we move away from emulating organics with technology and toward creating technology within organics.


Guest tv_party

The Chinese Room argument is unsatisfying for me as a decisive verdict on AI, at least until we can give a more satisfying and objective definition of consciousness and understanding, if that's even possible.

Ostensibly the human mind is no more than a complex collection of specialized stimulus-response/symbol-processing engines orchestrated by an outer executive layer.

An error in the Chinese Room argument is that it compares a general-purpose machine to a specialized one.

The general-purpose machine (the human mind) is demonstrably made up of many specialized machines that have no awareness of the other parts.

To me it's like comparing a cell to an organism while disregarding the difference in complexity.

The legacy hubris of a privileged perspective leads us to favor dualistic assumptions that have real potential to mislead, IMO.

That said I do find emergent behavior interesting.
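On emergent behavior, a classic minimal example is an elementary cellular automaton such as Rule 110, where an update table that fits on one line produces strikingly complex, long-lived structures. A quick sketch:

```python
# Hypothetical sketch: Rule 110, an elementary cellular automaton.
# Each cell's next state depends only on itself and its two neighbors,
# via this fixed 8-entry table - yet the global patterns that emerge
# are complex enough that Rule 110 is known to be Turing-complete.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply Rule 110 once to a row of cells, wrapping at the edges."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0] * 20 + [1] + [0] * 20   # start from a single live cell
for _ in range(10):               # print successive generations
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Simple local rules, rich global behavior: the same flavor of argument the modular-mind view makes about specialized engines adding up to something more.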

 

edit: as a sidenote on a related tangent, I recommend reading Oliver Sacks's work

Edited by tv_party

  assegai said:
  abusivegeorge said:
  scones to die for said:
  assegai said:
I thought this was an Al thread, left disappointed.

alborland.gif

Me too.

 

happydaysaldelvechia.jpg

 

al-gore.jpg

capone.jpg

happydaysaldelvechia.jpg

 

albundy.jpg

 

