Author Topic: If We Created an AI with Free Will, Who Would Be Responsible for Its Actions?


Offline Seppuku

  • Fellow
  • *******
  • Posts: 3855
  • Darwins +125/-1
  • Gender: Male
  • I am gay for Fred Phelps
    • Seppuku Arts
Here's a hypothetical situation for you. It might be interesting to hear responses from theists too, as this subject involves creation.

Let's say we've reached the point where we've developed a true AI, and that AI has its own free will. But we've programmed in various traits: for example, they can get annoyed, feel anger, and experience various other emotions. Certain desires. Certain dependencies (for example, the acquisition of energy to power them). We send them out into the world with only a certain set of rules and an instruction to obey our laws, but rather than hard-coding these rules into them, we allow them to choose.

I've got a few questions on people's views:

- If, say, 20 of these robot AIs broke one of our laws and each decided to shoot up a school, who is to blame: the AIs or the programmers?
- If these robot AIs went to war against humans, who is responsible: the AIs, or the creators for giving them that ability?
- What if these AIs stole because they needed to supply themselves with an energy resource and had no other means of acquiring it? Whose fault is it?
- If a robot AI kills somebody because that person triggered the part of its programming that represents anger, who is in the wrong: the person for provoking the AI, the AI itself, or the people who programmed the AI in such a way?
- If there is something faulty in the AI's design - for example, if they spontaneously combust - whose fault is it? Nobody's, because it's out of people's control and therefore natural? The programmers or engineers, because it was faulty programming or faulty engineering? Or was it a design feature, working as intended, and simply unfortunate that it happened to that particular unit?


Now suppose these true AIs have what we'd refer to as 'emotions' programmed into them, resemble our species in many ways, and even have a system of nerves. Heck, give them programming for love and compassion:
- Should the engineers have the right to permanently deactivate them, even when one machine 'loves' another?
- Should the engineers have the right to stress-test their nervous systems, probing their responsiveness to physical stressors - akin to torturing a person to see how much pain they can handle?
- Should the engineers have the right to push their emotional programming - to essentially psychologically torture these AIs?
- Should the engineers have the right to instill fear in these AIs through threats of deactivation, or even suffering, should they break any of our laws?
“It is difficult to understand the universe if you only study one planet” - Miyamoto Musashi
Warning: I occasionally forget to proofread my posts to spot typos or poor editing.

Offline Nick

  • Laureate
  • *********
  • Posts: 10294
  • Darwins +177/-8
  • Gender: Male
I would give them free will, tell them they have to do as I say without a lot of specific info, and send the ones that don't to the melting pit. ;)
Yo, put that in your pipe and smoke it.  Quit ragging on my Lord.

Tide goes in, tide goes out !!!

Offline Graybeard

  • Global Moderator
  • ******
  • Posts: 6581
  • Darwins +514/-18
  • Gender: Male
  • Is this going somewhere?
The answer is the same as for any machine that man makes: the maker is liable for the damages their product causes.

(Just as a commercial proposition, these robots seem a failure - who would buy one? There are enough arseholes in the world without bad-tempered, hysterical robots. It reminds me of a fictitious Victorian product, "Arkwright's Famous Steam Powered Bed-wetter.")
Nobody says “There are many things that we thought were natural processes, but now know that a god did them.”

Offline Garja

  • Postgraduate
  • *****
  • Posts: 759
  • Darwins +38/-0
  • Gender: Male
  • WWGHA Member
It does seem like the programmers would have no motivation to create these AIs apart from seeing if they could.

Let's assume that the program is flawless and they are legitimately free agents.  I think the public or government would have to decide in advance whether we, as a society, want to create these AIs. If we did this - say, to meet the needs of a labor shortage - yet for some reason wanted these machines to have free will, then I think the engineers are NOT responsible for the actions of their creation.  If they create them of their own accord, just for the hell of it, then they ARE responsible.  As for the morality questions: again, if it is truly an AI, it should have the same level of respect as organic life.

Side note: you've been thinking a lot about Mass Effect, haven't you?
"If we look back into history for the character of the present sects in Christianity, we shall find few that have not in their turns been persecutors, and complainers of persecution."

-Benjamin Franklin

Offline Azdgari

  • Laureate
  • *********
  • Posts: 12222
  • Darwins +268/-31
  • Gender: Male
Free agents are not entirely predictable.  That's what makes them appear to be free agents.

Hurricanes are also not entirely predictable.  For all intents and purposes, one could take a hurricane to be a crazy free agent, consciousness aside.

What if someone made a device that was able to create a hurricane, after which one of these created storms unexpectedly went and ravaged a city on the eastern seaboard?  Would the programmer/engineer of the machine be responsible, or would the hurricane be responsible?

Perhaps creating such unpredictable constructs should be illegal from the outset.  That's what engineers are supposed to avoid, after all.
The highest moral human authority is copied by our Gandhi neurons through observation.

Offline Garja

  • Postgraduate
  • *****
  • Posts: 759
  • Darwins +38/-0
  • Gender: Male
  • WWGHA Member
^ Not sure I agree with your argument.

We are saying that these AIs have an "I"; a hurricane does not.  Also, in your scenario the engineer is Cobra Commander creating a weather device.  ;)
"If we look back into history for the character of the present sects in Christianity, we shall find few that have not in their turns been persecutors, and complainers of persecution."

-Benjamin Franklin

Offline Azdgari

  • Laureate
  • *********
  • Posts: 12222
  • Darwins +268/-31
  • Gender: Male
I know that I'm omitting consideration of the "I".  The OP did not ask about the "I", however; it asked about free will.

Metaphysical free will does not exist for machines or for hurricanes, any more than it exists for human beings.  What humans and machines have is unpredictability.

For clarity in judging responsibility, consider only the programmer/engineer.  (S)he is creating something unpredictable and potentially dangerous to human life.  That is irresponsible, and in these cases should be illegal on those grounds.  (S)he is responsible for the decision to create the machine/hurricane, and thus for inviting the risk associated with an unpredictable construct.

This is true regardless of whether the machine, or the hurricane, has feelings.
The highest moral human authority is copied by our Gandhi neurons through observation.

Offline Seppuku

  • Fellow
  • *******
  • Posts: 3855
  • Darwins +125/-1
  • Gender: Male
  • I am gay for Fred Phelps
    • Seppuku Arts
Quote from: Garja
It does seem like the programmers would have no motivation to create these AIs apart from seeing if they could.

Let's say we could benefit from them as workers, and for other purposes too - not as mindless workers following instructions, but as beings capable of initiative, free thought, creativity, and heck, even emotion (compassion is good for care work, for example) - essentially there to mimic humans. I suppose the scientists themselves did it to see if they could, but in terms of their usefulness, there are examples where they would be useful, so they have a purpose.

Quote from: Garja
Let's assume that the program is flawless and they are legitimately free agents.  I think the public or government would have to decide in advance whether we, as a society, want to create these AIs. If we did this - say, to meet the needs of a labor shortage - yet for some reason wanted these machines to have free will.

Fair enough, we can involve the government in this.

Quote from: Garja
Side note: you've been thinking a lot about Mass Effect, haven't you?

Lol, it's possible it had its influence, but I was discussing the human brain with MM via email and we kind of drifted onto the topic of AIs. I posted this after a programming session. Though the Geth probably influenced the second set of questions.

Quote from: Azdgari
I know that I'm omitting consideration of the "I".  The OP did not ask about the "I", however; it asked about free will.

This is true, I did not. It strays from my original questions, but I think your point is still relevant.
“It is difficult to understand the universe if you only study one planet” - Miyamoto Musashi
Warning: I occasionally forget to proofread my posts to spot typos or poor editing.

Offline Garja

  • Postgraduate
  • *****
  • Posts: 759
  • Darwins +38/-0
  • Gender: Male
  • WWGHA Member
I still don't like the parallel to the hurricane.  I get it, but I just don't like it.  A weather phenomenon may act, but I certainly don't think it has will, nor do I concede the point that it is free.  Any weather pattern is governed by high- and low-pressure systems, ambient temperature and humidity, elevation, etc.  Just because we cannot model its behavior exactly doesn't mean it has free will.

I concede the other point, that a creator ... sorry, programmer ... is responsible for the actions of its creation, because my explanation of how the person would escape responsibility sounds convoluted even to me.
"If we look back into history for the character of the present sects in Christianity, we shall find few that have not in their turns been persecutors, and complainers of persecution."

-Benjamin Franklin

Offline Azdgari

  • Laureate
  • *********
  • Posts: 12222
  • Darwins +268/-31
  • Gender: Male
The problem with how you're treating "free will" is that it apparently violates all of the physical constraints you've cited.  But will is a product of the brain (or of the machine's motherboard, etc.), and that's a physical object which obeys physical laws.  What is different, in that respect, between a brain and a computer?
The highest moral human authority is copied by our Gandhi neurons through observation.

Offline Garja

  • Postgraduate
  • *****
  • Posts: 759
  • Darwins +38/-0
  • Gender: Male
  • WWGHA Member
Quote from: Azdgari
The problem with how you're treating "free will" is that it apparently violates all of the physical constraints you've cited.  But will is a product of the brain (or of the machine's motherboard, etc.), and that's a physical object which obeys physical laws.  What is different, in that respect, between a brain and a computer?

I am saying that there is no difference between a brain and a sufficiently sophisticated AI.  I recognize that both are physical objects and are constrained by physical laws... for instance, I cannot fly, but I can WANT to fly, and I can choose to reply to forum posts that I don't feel particularly strongly about but consider good mental exercise, or I can just let those posts go untyped.
« Last Edit: August 16, 2012, 10:01:33 PM by Garja »
"If we look back into history for the character of the present sects in Christianity, we shall find few that have not in their turns been persecutors, and complainers of persecution."

-Benjamin Franklin

Offline Azdgari

  • Laureate
  • *********
  • Posts: 12222
  • Darwins +268/-31
  • Gender: Male
A brain, a CPU, and a hurricane are all physical objects following physical laws.  None of them are free of the constraints you cited as being reasons why a hurricane can't be treated as free.  I'm saying that there is no difference, with respect to metaphysical freedom of will, between a hurricane and the human brain.

Do you disagree?  Your own points argue in the same direction.
The highest moral human authority is copied by our Gandhi neurons through observation.

Offline Avatar Of Belial

  • Graduate
  • ****
  • Posts: 499
  • Darwins +30/-1
  • Gender: Male
  • I'm not an Evil person; I just act like one!
Why not treat the AI like a human from the start? And I mean from the start - humans are not immediately considered responsible for their own actions; their parents are. Childhood for humans would be a 'trial period' for true AI - which would have to include the ability to learn.

Thus the parent/guardian is little different from the builder/buyer, and responsibility would be determined similarly. There would be small modifications - an AI is expected to learn social mores and the reasons for them faster, and it has no physical maturation unless we switch its body - so we can assume a faster rate of independence, for example.
"You play make-believe every day of your life, and yet you have no concept of 'imagination'."
I do not have "faith" in science. I have expectations of science. "Faith" in something is an unfounded assertion, whereas reasonable expectations require a precedent.

Offline Barracuda

Seppuku, there are a couple of reasons why these are meaningless questions.

1) What are you saying when labeling someone as "responsible" or "to blame" for something? How would a universe in which being X is responsible for result Y look different from a universe in which being X is not responsible for result Y?

2) The concept of free will makes no sense. If it obeys physical laws, then it is a slave to those laws (just like humans are) and does not have free will (just like humans don't). If its actions don't obey any laws - assuming such pure randomness is possible - then they are completely random and therefore could not be said to be a "choice"; ergo, this thing still won't have free will.

Offline Azdgari

  • Laureate
  • *********
  • Posts: 12222
  • Darwins +268/-31
  • Gender: Male
Quote from: Avatar Of Belial
Why not treat the AI like a human from the start? And I mean from the start - humans are not immediately considered responsible for their own actions; their parents are. Childhood for humans would be a 'trial period' for true AI - which would have to include the ability to learn.

Because we can predict more or less how a human might develop.  We don't have the experience to do that with an AI.  And there are minds that are not held responsible for their associated actions.  The severely mentally handicapped, for example.  Or - to an extent - animals.

To what degree do we hold dogs responsible for getting into the garbage?
The highest moral human authority is copied by our Gandhi neurons through observation.

Offline Avatar Of Belial

  • Graduate
  • ****
  • Posts: 499
  • Darwins +30/-1
  • Gender: Male
  • I'm not an Evil person; I just act like one!
Quote from: Azdgari
Because we can predict more or less how a human might develop.  We don't have the experience to do that with an AI.  And there are minds that are not held responsible for their associated actions.  The severely mentally handicapped, for example.  Or - to an extent - animals.

We may not have experience now, but that would come with time and shape the "modifications" I mentioned.

Quote from: Azdgari
To what degree do we hold dogs responsible for getting into the garbage?

To the degree we have spent training them not to. We can't expect a puppy to know better, but as the dog grows, it is the responsibility of the dog's caretaker to shape the dog's behavior, just as a parent/guardian shapes a human child's behavior. If the parent/guardian/caretaker/programmer/whatever fails to provide instruction, then responsibility is shared. A dog that was not properly trained to stay out of the garbage cannot be expected to refrain from doing so, but the dog is still the one that does it. Thus both the dog and the owner are to blame.
"You play make-believe every day of your life, and yet you have no concept of 'imagination'."
I do not have "faith" in science. I have expectations of science. "Faith" in something is an unfounded assertion, whereas reasonable expectations require a precedent.

Offline Garja

  • Postgraduate
  • *****
  • Posts: 759
  • Darwins +38/-0
  • Gender: Male
  • WWGHA Member
Quote from: Azdgari
A brain, a CPU, and a hurricane are all physical objects following physical laws.  None of them are free of the constraints you cited as being reasons why a hurricane can't be treated as free.  I'm saying that there is no difference, with respect to metaphysical freedom of will, between a hurricane and the human brain.

Do you disagree?  Your own points argue in the same direction.

I do disagree... I think.  I may be equating "will" with "consciousness" more than you are for the purposes of this discussion.  In any case, yes, I believe that humans, hurricanes, and our hypothetical AI are all acted upon by and subject to physical law.  What I am saying is that if I am standing on the east coast of the US, I can make a willful choice to go north, or I can choose to go south.  The hurricane has no choice where it goes, as it is controlled by water currents, the Coriolis effect, etc.

Similarly, I am saying that a sufficiently advanced AI with, as far as we can tell, a true ability to make choices is more similar to a human than to a hurricane.
"If we look back into history for the character of the present sects in Christianity, we shall find few that have not in their turns been persecutors, and complainers of persecution."

-Benjamin Franklin

Offline Azdgari

  • Laureate
  • *********
  • Posts: 12222
  • Darwins +268/-31
  • Gender: Male
The OP never mentioned consciousness.
The highest moral human authority is copied by our Gandhi neurons through observation.

Offline Samothec

  • Postgraduate
  • *****
  • Posts: 585
  • Darwins +49/-2
  • Gender: Male
You have some hidden assumptions in this premise that I think are faulty. AFAIK, no country's laws totally absolve people of the death and destruction their machinery causes. There would need to be situations where the owner/creator/programmer clearly had no influence or control over whatever caused the death and destruction. Which means that, as the laws currently stand, the owners/creators/programmers would be held responsible in almost every incident.

Also, anyone who builds and programs an autonomous device capable of real-world interaction would be expected to include Asimov's three laws of robotics; not doing so would likely result in blame being directed at the builder/programmer. Going a step further, not only would Asimov's three laws need to be included, but "morality" beyond those laws would have to be included before anyone would even begin to consider allowing such a device to count as its own independent being. (See the sketch below for one way such prioritized rules might look.)
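To make that concrete, here is a minimal sketch of Asimov's three laws treated as ordered veto rules. Everything here - the `world_model` interface, the outcome flags - is a hypothetical illustration for this thread, not a real robotics API:

```python
# Minimal sketch: Asimov's three laws as ordered veto rules.
# All names (world_model, predict, the outcome flags) are hypothetical.

def first_violated_law(action, world_model):
    """Return the number of the first law the action would break, else None."""
    outcome = world_model.predict(action)  # hypothetical outcome simulator
    if outcome.harms_human or outcome.lets_human_come_to_harm:
        return 1  # First Law: never harm a human, or allow harm through inaction
    if outcome.disobeys_order and not outcome.order_conflicts_with_first_law:
        return 2  # Second Law: obey humans, unless that conflicts with Law 1
    if outcome.harms_self:
        return 3  # Third Law: protect own existence, unless 1 or 2 say otherwise
    return None

def choose_action(candidates, world_model):
    """Pick the first candidate action that violates none of the laws."""
    for action in candidates:
        if first_violated_law(action, world_model) is None:
            return action
    return None  # no lawful action available; do nothing
```

The ordering is the point: each rule only gets a say once every higher-priority rule is satisfied, and anything the three laws never mention passes through unchecked - which is exactly the gap the extra "morality" would have to fill.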

Another hidden assumption is that someone would want to create a mechanical version of a human complete with all our flaws – instead of improving upon us. A complete model of the human mind, fine. A complete physical simulacrum, fine. Both together? Why?

This is not intended as a harsh criticism. This is meant to point out that it appears you have bought into the sloppy work of others and do not realize the inherent problems.


Ignoring those problems: as implied above, some of the answers will depend upon the work put into legally declaring these artificial beings independent. If they have been declared independent, then their builders/programmers should not be held legally responsible. If they have not, then obviously their builders/programmers should be.

Yes, this avoids the question of free will, but free will rarely enters into discussions about who is responsible for people in those same situations; it is almost always a discussion about mental capacity - one's awareness and morality.
« Last Edit: August 16, 2012, 10:59:50 PM by Samothec »
Faith must trample under foot all reason, sense, and understanding. - Martin Luther

Offline Seppuku

  • Fellow
  • *******
  • Posts: 3855
  • Darwins +125/-1
  • Gender: Male
  • I am gay for Fred Phelps
    • Seppuku Arts
Quote from: Samothec
You have some hidden assumptions in this premise that I think are faulty. AFAIK, no country's laws totally absolve people of the death and destruction their machinery causes. There would need to be situations where the owner/creator/programmer clearly had no influence or control over whatever caused the death and destruction. Which means that, as the laws currently stand, the owners/creators/programmers would be held responsible in almost every incident.

From a legal perspective, I think you're absolutely right; I was thinking of it as more of a moral conundrum than a legal one. Though obviously, if we were to think about it more realistically, we'd include how the law would hold one responsible. There is no law that would hold an AI to account; the law would instead put the creator at fault (for creating something faulty/dangerous). But I wonder: why would we hold the creator to account by law in those cases, when from the Christian perspective God cannot be held to account for the 'evil' done by humans, as all responsibility falls on us? We did not create ourselves, nor did we write our own programming; by the creationist argument, God did. Yet when a human creates, it is the human who is responsible. I wonder if a creationist (which is why I am hoping for some theist input) would argue differently from the law, and from how we've argued, based on their belief in God?

This is also why Azdgari's points are relevant: whilst he is not talking about an 'I', he is talking about something created. In Creationism, God created the weather systems and yet is not held to account for them. In Azdgari's scenario, the creator would be. I would be interested to know why we as humans would be responsible for our creations when they have a mind of their own, but a deity wouldn't be.

Quote
Another hidden assumption is that someone would want to create a mechanical version of a human complete with all our flaws – instead of improving upon us. A complete model of the human mind, fine. A complete physical simulacrum, fine. Both together? Why?

I think that is a fair concern. Why would we build them flawed? From a practical perspective, it would be hard to argue that the flaws should be kept in order for them to serve their purpose, and as you say, a model of the human mind doesn't need to be combined with a physical simulacrum, as that experiment could be more controlled. But I suppose we could also ask: if humans were created, then why did our creator program us with our flaws?

Perhaps having a balance of negative and positive attributes helps with the case for free will. If we were to give an AI true free will, would we give it only positive traits? Ultimately its decisions would then be based only on doing 'positive' things, and no choice to do 'wrong' would exist. Why would we give an AI free will at all? The only answer I can think of for a human creator is as a social experiment: to understand free will itself, and even to understand ourselves and our perception of morality in a real-world situation with real-world stressors (in psychology, lab experiments aren't considered the most accurate). Naturally, with the ethics required by law, such a thing wouldn't be legal, but that wouldn't stop somebody from doing it. In some science fiction, free will occurs by accident rather than through design. Maybe we would even sympathise with these creations enough to say they deserve free will; maybe we would view them as more than machines - perhaps a foolish concept, but one possibility. Ultimately, in my hypothetical situation the AIs would have free will. I don't think they would be a true intelligence without it.

Quote
If they have been declared independent, then their builders/programmers should not be held legally responsible. If they have not, then obviously their builders/programmers should be.

This is the kind of answer I was looking for. I think it's interesting that you give that response (because it's not the one I would have given :)), but I do have to ask: why? The builders gave them these flaws and even, to a degree, built them faulty (a reference to the question where they spontaneously combust), and the builders gave them free will - in essence, that independence of thought. You suggested that, realistically speaking, we would build them to be better than us; but if we coded them with the same flaws and could foresee them exhibiting the same behaviour, then surely the responsibility would fall on the creators. If they are trying to mimic human behaviour, how could they not foresee things like school shootings, theft, war, and all of the other crimes against humanity that humans themselves commit?



The discussion here also applies to God, or any creator deity. Should God be responsible for us, his creations? Should he be responsible for our flaws, because that is how he created us? Should God be responsible for the flaws our physical beings have, like disease? Why has God created us in such a way, when in his omniscience he would have foreseen the effects? Does God have the right to destroy us because he created us? Does God have the right to torture us or cause us pain because he created us? Does God have the right to play with our emotions, or even have us cower in fear, because he created us?

I think it works both ways too, because when we pose the 'Problem of Evil' argument, if we assume God exists, then we are essentially holding him to account - why does evil exist? Would this apply to us when it comes to our creations? It would be interesting to see androids hold us to account for the cruelties they've committed against each other, on the grounds that we created them that way.

I am curious how attitudes differ (if they do) towards a human creating something versus a deity creating something.
“It is difficult to understand the universe if you only study one planet” - Miyamoto Musashi
Warning: I occasionally forget to proofread my posts to spot typos or poor editing.

Offline jaimehlers

  • Fellow
  • *******
  • Posts: 4768
  • Darwins +546/-14
  • Gender: Male
  • WWGHA Member
I'd say it's the programmer's responsibility, to a point.  We don't hold parents responsible for the actions of their children past a certain point, either, and the two situations are simply not that much different.

Offline magicmiles

  • Fellow
  • *******
  • Posts: 2829
  • Darwins +175/-73
  • Gender: Male
Quote from: jaimehlers
We don't hold parents responsible for the actions of their children past a certain point, either, and the two situations are simply not that much different.

I was thinking that same thing. As potential parents, we know there is a risk our children will do great harm despite our best efforts to prevent it. It doesn't prevent us from creating new life, and I think we're driven by more than a biological urge to do so.

Seppuku, I will try and give some input on your OP and follow up comments when I have a bit more time to think it through.
The 2010 world cup was ruined for me by that slippery bastard Paul.

Offline plethora

  • Fellow
  • *******
  • Posts: 3456
  • Darwins +60/-1
  • Gender: Male
  • Metalhead, Family Man, IT Admin & Anti-Theist \m/
I subscribe to the argument that there is no such thing as "free will" and that it is a logically impossible notion. So in reality, from a philosophical standpoint, I do not think anyone is to blame for anything really.

Of course, we do assign blame when it comes to practical reality... despite it being a philosophically incorrect view.

Since we assign degrees of blame and responsibility rather arbitrarily, and do so largely based on our own subjective morality, what you are asking will produce different responses - none of them really correct, because the act of assigning blame is logically flawed from the start.

Personally, playing along with this hypothetical scenario and using my practical, day-to-day, fundamentally flawed human perspective, I would say the 'blame' is shared between the designer and the agent.
The truth doesn't give a shit about our feelings.

Offline jaimehlers

  • Fellow
  • *******
  • Posts: 4768
  • Darwins +546/-14
  • Gender: Male
  • WWGHA Member
The question of "free will" should be out of bounds for this discussion, because it never goes anywhere; it always boils down to one side arguing that it cannot exist, and the other side arguing that it must exist, assuming the two sides are of comparable stubbornness.  Let's talk about the ability to make decisions instead.

In short, we're talking about an AI that can learn and make decisions on its own, without depending on input from a programmer; I think this is called heuristic programming.  Once you get to that point, the AI can become much more capable much more quickly, because it operates on the nanosecond scale (a billionth of a second).  Just for comparison, a billion seconds is more than three decades - so if each nanosecond of operation counted like a second of human experience, an AI could acquire an equivalent amount of experience in a second or two.  Once an AI can heuristically improve itself and its decision-making ability, you can't reasonably hold programmers responsible for what it does beyond that point, except in the sense that they laid the foundation.
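The arithmetic behind that comparison, as a quick sketch (the one-nanosecond-equals-one-second equivalence is the post's assumption, not a measured figure):

```python
# Back-of-the-envelope check of the time-scale comparison above.
# Assumption (from the post, not a measurement): one nanosecond of AI
# operation is treated as equivalent to one second of human experience.

NS_PER_SECOND = 1_000_000_000           # a billion nanoseconds in one second
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 31.6 million seconds

# A billion seconds expressed in years:
years = NS_PER_SECOND / SECONDS_PER_YEAR
print(f"A billion seconds is about {years:.1f} years")  # ~31.7, over three decades

# Under the stated equivalence, one wall-clock second gives the AI a
# billion "experience-seconds" - roughly 31.7 subjective years.
```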

It's the same reason you can't hold parents responsible for what their adult offspring do once they've moved out on their own, except in the sense that they laid the foundations.  In fact, a programmer would likely have even less ability to affect the AI than a parent has to affect their children, because by the time the programmer has had time to do anything, the AI has already had several relative decades to do other things in the meantime.

Offline plethora

  • Fellow
  • *******
  • Posts: 3456
  • Darwins +60/-1
  • Gender: Male
  • Metalhead, Family Man, IT Admin & Anti-Theist \m/
^^^ Very well said, jaimehlers. I can agree with the above.
The truth doesn't give a shit about our feelings.

Offline Samothec

  • Postgraduate
  • *****
  • Posts: 585
  • Darwins +49/-2
  • Gender: Male
Quote from: Seppuku
In Creationism, God created the weather systems and yet is not held to account for them. In Azdgari's scenario, the creator would be. I would be interested to know why we as humans would be responsible for our creations when they have a mind of their own, but a deity wouldn't be.
...But I suppose we could also ask: if humans were created, then why did our creator program us with our flaws?
...The discussion here also applies to God, or any creator deity. Should God be responsible for us, his creations? Should he be responsible for our flaws, because that is how he created us? Should God be responsible for the flaws our physical beings have, like disease? Why has God created us in such a way, when in his omniscience he would have foreseen the effects? Does God have the right to destroy us because he created us? Does God have the right to torture us or cause us pain because he created us? Does God have the right to play with our emotions, or even have us cower in fear, because he created us?

I think it works both ways too, because when we pose the 'Problem of Evil' argument, if we assume God exists, then we are essentially holding him to account - why does evil exist?

This is where I differ with some people: I do hold the creator responsible - if one exists. I don't see any evidence for a benevolent creator. I do see possible evidence for a malevolent creator but I ignore it since the evidence also points to a bloody fate for me (but it's not my blood). I am good because it is the human/humane thing to do. I will not submit to god because I choose not to kill other people.

Why the different attitudes towards a human creator of an autonomous AI and god? Because a human can make mistakes, big ones.


Quote
Perhaps having a balance of negative and positive attributes helps with the case for free will. If we were to give an AI true free will, would we give it only positive traits? Ultimately its decisions would then be based only on doing 'positive' things, and no choice to do 'wrong' would exist. Why would we give an AI free will at all? The only answer I can think of for a human creator is as a social experiment: to understand free will itself, and even to understand ourselves and our perception of morality in a real-world situation with real-world stressors (in psychology, lab experiments aren't considered the most accurate). Naturally, with the ethics required by law, such a thing wouldn't be legal, but that wouldn't stop somebody from doing it.

"Look before you leap." "The road to Hell is paved with good intentions."
These both effectively mean the same thing[1]. You can program an AI with only positive traits and an ability to make choices, and it can still unintentionally do wrong (see the sketch below).
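As a toy illustration of that point (the scenario and numbers are entirely hypothetical): an agent that scores actions only on a positive goal can still pick a harmful option if its model of the world leaves the side effects out.

```python
# Toy illustration (hypothetical scenario): an agent with only a
# "positive" objective - maximize help delivered - still picks a
# harmful action, because its model never represents the side effect.

actions = {
    # action: (help_delivered, harm_outside_the_agent's_model)
    "reroute_spare_power_to_hospital": (10, 0),
    "reroute_ALL_city_power_to_hospital": (12, 50),  # blacks out homes elsewhere
}

def score(action):
    help_delivered, _unseen_harm = actions[action]
    return help_delivered  # the agent only "sees" the positive trait

best = max(actions, key=score)
print(best)  # -> reroute_ALL_city_power_to_hospital: good intentions, bad outcome
```

Nothing in the agent is malicious; the wrong comes from what its programming never told it to look at.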


Quote
In the case of some science fiction, free will can occur just as an accident and not through design.
Usually bad science fiction.


Quote
The builders gave them these flaws and even, to a degree, built them faulty (a reference to the question where they spontaneously combust), and the builders gave them free will - in essence, that independence of thought. You suggested that, realistically speaking, we would build them to be better than us; but if we coded them with the same flaws and could foresee them exhibiting the same behaviour, then surely the responsibility would fall on the creators. If they are trying to mimic human behaviour, how could they not foresee things like school shootings, theft, war, and all of the other crimes against humanity that humans themselves commit?

This is part of why I don't think we will grant independence to any AI. We know that any AI not built to be better than us will be a potential danger, and even those built to be better could be faulty, or reprogrammed (whether by humans or via learning) into becoming a danger.
 1. Although the "good intentions" line is often misused so it becomes synonymous with "The ends justify the means."
Faith must trample under foot all reason, sense, and understanding. - Martin Luther

Offline Noman Peopled

  • Reader
  • ******
  • Posts: 1904
  • Darwins +24/-1
  • Gender: Male
  • [insert wittycism]
bm
"Deferinate" itself appears to be a new word... though I'm perfectly carmotic with it.
-xphobe

Offline inveni0

  • Postgraduate
  • *****
  • Posts: 556
  • Darwins +11/-1
    • iMAGINARY god
The worst thing about this hypothetical situation is that with God and Man, God KNEW Jeffrey Dahmer was going to kill people.  God KNEW that child molesters would horribly disfigure and emotionally scar millions of children over the years.  It's really sick, if you think about it.
http://www.imaginarygod.com

My book designed to ease kids into healthy skepticism is available for pre-order. http://www.peterskeeter.com

Offline joebbowers

  • Reader
  • ******
  • Posts: 1074
  • Darwins +91/-47
  • Gender: Male
    • My Photography
But Inveni0, Christians can simultaneously claim that God is both responsible and not responsible for our actions. When people do bad things, it was free will; when people do good things, it was God's plan. This absolves God of any responsibility, and absolves the Christian of examining God's intentions.
"Do you see a problem with insisting that the normal ways in which you determine fact from fiction is something you have to turn off in order to maintain the belief in God?" - JeffPT