
Proof of Animal Consciousness Thread



  On 9/24/2016 at 9:44 PM, LimpyLoo said:

Yes, but what should we tell them to do?

 

You say "what their training data suggests they SHOULD do"...you need to have an aim/goal/values to have a "should"...and they (aims etc) won't just magically appear...they need to be put there by humans

 

So again, we're back to WHAT goals/aims/values should be picked, and we're back to the need for philosophy and ethics and all that

 

Maximum number of lives saved. What other option is there? Maybe we can be a bit objective-erring-on-the-side-of-becoming-immoral and apply weights to individual lives as well. Save the President over two murderers.
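To make the weighting idea concrete, here's a toy sketch; the roles, weights, and maneuvers are all made up for illustration, not anything a real car uses:

```python
# Toy sketch of "apply weights to individual lives": score each possible
# maneuver by the summed weight of the lives it is expected to cost,
# then pick the cheapest. All weights and scenarios are invented.

WEIGHTS = {"president": 5.0, "civilian": 1.0, "murderer": 0.5}

def casualty_cost(casualties):
    """Sum the (made-up) weights of everyone a maneuver would kill."""
    return sum(WEIGHTS[role] for role in casualties)

def pick_maneuver(options):
    """Choose the maneuver with the lowest weighted casualty cost."""
    return min(options, key=lambda name: casualty_cost(options[name]))

options = {
    "swerve_left": ["president"],              # cost 5.0
    "swerve_right": ["murderer", "murderer"],  # cost 1.0
    "brake_straight": ["civilian"],            # cost 1.0
}
print(pick_maneuver(options))  # "swerve_right" (first of the two tied options)
```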

 

Realistically though, if every car is self-driving and we outlaw non-self-driving cars, we can make it so no accidents ever happen. Every car can constantly scan the area it's in, above and below, in front and behind on each side, and share that information over the internet with every other car in the world. They can literally avoid all accidents. Computers solve problems like this all the time with concurrent scheduling algorithms; it's stupidly easy for them.
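The scheduling claim can at least be sketched: if cars reserve non-overlapping time slots in shared space, collisions in that space are ruled out by construction. A minimal toy version (invented API, nothing like a real AV stack):

```python
# Toy "concurrent scheduling" for a shared intersection: each car requests
# a time slot, and the scheduler only grants slots that overlap no existing
# reservation, so two cars can never occupy the intersection at once.

class IntersectionScheduler:
    def __init__(self):
        self.reservations = []  # list of (start, end, car_id) tuples

    def request_slot(self, car_id, start, end):
        """Grant the requested slot, shifting it later until it conflicts
        with no existing reservation."""
        duration = end - start
        shifted = True
        while shifted:
            shifted = False
            for s, e, _ in self.reservations:
                if start < e and s < end:  # intervals overlap
                    start = e              # retry just after the conflict
                    end = start + duration
                    shifted = True
        self.reservations.append((start, end, car_id))
        return start, end

sched = IntersectionScheduler()
print(sched.request_slot("car_a", 0.0, 2.0))  # (0.0, 2.0)
print(sched.request_slot("car_b", 1.0, 3.0))  # bumped to (2.0, 4.0)
```

Of course, this only covers coordination among cooperating cars; it says nothing about pedestrians, sensor failures, or anything else off the network.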

what if ur late for a haircut and the shortest way to the barber is to drive through a school yard tho??

The real issue with artificial intelligence is that it's impossible to 100% probe the range and domain of a sufficiently advanced decision-making system to test its behavior. You can't say "hey, it's trained in this way so it will do this, and we can now verify it probably won't do that" and trust it, and you can't brute-force it because the domain is too large; the universe would die before we could do that. You have to add manual overrides for output behavior, which limit the decision-making abilities of the system. That's my understanding; I haven't studied the topic as much as I should have.
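The "manual overrides" point is easier to see in code: a small hand-written guard layer gets the final say over whatever the unverifiable learned system proposes, because the guard is simple enough to check exhaustively. Everything below is an invented sketch:

```python
# Sketch of a hard safety override wrapped around an unverifiable learned
# policy: the black box proposes, a tiny hand-written layer disposes.
# All names and thresholds here are invented for illustration.

def learned_policy(observation):
    """Stand-in for the black-box decision system we can't fully verify."""
    return {"throttle": 0.9, "steering": 0.3}

def safety_override(observation, action):
    """Hand-written envelope, small enough to reason about exhaustively."""
    if observation["obstacle_distance_m"] < 5.0:
        return {"throttle": 0.0, "steering": 0.0}  # hard stop, ignore the model
    action["throttle"] = min(action["throttle"], 0.5)  # clamp to a safe cap
    return action

obs = {"obstacle_distance_m": 3.2}
print(safety_override(obs, learned_policy(obs)))  # {'throttle': 0.0, 'steering': 0.0}
```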

so like the pc wouldnt be able to be sure 100% if it was recess or lunch time or if the kids were out playin??

or liek mebe the computer thinks your hair looks good already so it drives you to the restaurant but you are already full??


def a lot of things to take into account!!

  On 11/28/2015 at 1:14 AM, Bechuga said:

Crows wait until your back is turned before swooping in and claiming that french fry someone dropped on the ground.

They do this because they're ashamed to be seen eating food off of the ground.

 

Need I say more?

The year-ago me is still on point

 

Also when did AI become animals?

  On 9/24/2016 at 9:55 PM, Zeffolia said:

 

Maximum number of lives saved. What other option is there? Maybe we can be a bit objective-erring-on-the-side-of-becoming-immoral and apply weights to individual lives as well. Save the President over two murderers.

 

Realistically though, if every car is self-driving and we outlaw non-self-driving cars, we can make it so no accidents ever happen. Every car can constantly scan the area it's in, above and below, in front and behind on each side, and share that information over the internet with every other car in the world. They can literally avoid all accidents. Computers solve problems like this all the time with concurrent scheduling algorithms; it's stupidly easy for them.

I don't know if you're making a subtle joke or what, but the Utilitarianism vs Deontology debate has been around for a long-ass time...

 

there are some serious problems with "save the greatest number of lives" as a moral heuristic...if that were the Highest Good, then we would be morally obliged to (e.g.) kill ONE person and give his organs to FIVE people on a transplant waiting list

 

In short, should we ACTUALLY be pushing one person off a bridge to save the five people down below? Or do we as individuals have a right not to be pushed off bridges for the greater good, or have our organs stolen to save the many?

  On 9/24/2016 at 11:32 PM, LimpyLoo said:

 

I don't know if you're making a subtle joke or what, but the Utilitarianism vs Deontology debate has been around for a long-ass time...

 

there are some serious problems with "save the greatest number of lives" as a moral heuristic...if that were the Highest Good, then we would be morally obliged to (e.g.) kill ONE person and give his organs to FIVE people on a transplant waiting list

 

In short, should we ACTUALLY be pushing one person off a bridge to save the five people down below? Or do we as individuals have a right not to be pushed off bridges for the greater good, or have our organs stolen to save the many?

 

 

We only have the right to resist them pushing us without feeling bad

Utilitarianism is not about the number of lives that can be saved but about the amount of joy and suffering that is felt, I think. A suffering person's life is not as valuable as a happy person's life. A murderer will cause a lot of suffering while a doctor can cure it. The sum of joy has to be maximized and the sum of suffering minimized. The problem is that joy and suffering cannot be measured. But they can be roughly estimated, and doing that we can easily see that we're not living in a utilitarian society: industrial livestock farming, nationalism, and individualism are not utilitarian.

Edited by darreichungsform
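The objective being described is the classical-utilitarian one: pick the action that maximizes total joy minus total suffering across everyone affected. A toy sketch with invented numbers:

```python
# Toy version of the utilitarian objective described above: choose the
# action maximizing summed joy minus summed suffering over everyone
# affected. The actions and numbers are made up for illustration.

def net_utility(outcome):
    """Sum joy minus suffering over all affected individuals."""
    return sum(joy - suffering for joy, suffering in outcome)

actions = {
    # action -> list of (joy, suffering) pairs, one per person affected
    "fund_hospitals": [(8, 1), (6, 2), (5, 3)],  # net +13
    "do_nothing":     [(4, 4), (3, 5), (2, 6)],  # net -6
}
best = max(actions, key=lambda name: net_utility(actions[name]))
print(best)  # "fund_hospitals"
```

As the post says, the hard part is that the (joy, suffering) numbers can at best be roughly estimated.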
  On 9/24/2016 at 11:53 PM, darreichungsform said:

Utilitarianism is not about the number of lives that can be saved but about the amount of joy and suffering that is felt, I think. A suffering person's life is not as valuable as a happy person's life. A murderer will cause a lot of suffering while a doctor can cure it. The sum of joy has to be maximized and the sum of suffering minimized. The problem is that joy and suffering cannot be measured. But they can be roughly estimated, and doing that we can easily see that we're not living in a utilitarian society: industrial livestock farming, nationalism, and individualism are not utilitarian.

So the car should err on the side of killing sad people? What about 3 sad people vs 2 happy people? Maybe since it's stressful being black in America--and therefore black people are probably less happy on average than white people--we should err on the side of killing black people...in fact, we should steal organs from black people and give them to white people

Thinking in radically utilitarian terms it would be an option, maybe. But I think a lot of people would feel bad if that happened, so even if you're a radical utilitarian it probably won't be an option.

I'm joking of course...I don't think the solutions lie in utilitarianism...I think we would rather our cars not kill people based on their social worth or their happiness or whatever; that's just a recipe for a shitty paranoid world

 

Although we should probably have heuristics for hitting a 4-year-old kid vs a 100-year-old person with stage 4 colon cancer (I would love to see the CCTV footage of that)

I think a mild utilitarianism isn't bad. I'd call it non-egocentric pragmatism


  On 9/24/2016 at 11:59 PM, xox said:

darreichungsform, what's with the signature? da faq

it means "kill all disbelievers"

  On 9/25/2016 at 12:30 AM, darreichungsform said:

 

it means "kill all disbelievers"

 

no it doesn't

  On 9/25/2016 at 7:19 PM, Blank said:

 

  On 9/25/2016 at 6:39 PM, darreichungsform said:

Butt

 

 

