You have some hidden assumptions in this premise that I think are faulty. AFAIK no country's laws totally absolve people from the death and destruction their machinery causes. There would need to be situations where the owner/creator/programmer clearly had no influence or control over the situation that caused the death and destruction. Which means that, as the laws currently stand, the owners/creators/programmers would be held responsible in almost every case/incident.
From a legal perspective, I think you're absolutely right. I was thinking of it as more of a moral conundrum than a legal one, though obviously, if we were to think about it more realistically, we'd include how the law would assign responsibility. There is no law that would hold an AI to account, but there are laws that would put the creator at fault (for creating something faulty/dangerous). But I wonder: why would we hold the creator to account by law in those cases, when from the Christian perspective God cannot be held to account for the 'evil' done by humans, as all the responsibility falls on us? We did not create ourselves, nor did we write our own programming; by the creationist argument, God did. Yet when a human creates, it is the human who is responsible. I wonder whether a creationist (which is why I am hoping for some theist input) would argue differently from the law, and differently from how we've argued, based on their belief in God?
This is also why Azdgari's points are relevant: whilst he is not talking about an 'I', he is talking about something created. In Creationism, God created the weather systems and yet is not held to account for them; in Azdgari's situation, the creator would be. I would be interested in knowing why we as humans would be responsible for our creations when they have a mind of their own, but a deity wouldn't be.
Another hidden assumption is that someone would want to create a mechanical version of a human complete with all our flaws – instead of improving upon us. A complete model of the human mind, fine. A complete physical simulacrum, fine. Both together? Why?
I think that is a fair concern. Why would we build them flawed? From a practical perspective, it'd be hard to argue that the flaws should be kept in order for them to serve their purpose, and as you say, a model of the human mind doesn't need to be combined with a physical simulacrum, since that experiment could be more controlled. But I suppose we could also ask: if humans were created, why did our creator program us with our flaws?
Perhaps having a balance of negative and positive attributes helps with the case of having free will. If we were to give an AI true free will, would we give it only positive traits? If so, its decisions would ultimately be based only on doing 'positive' things, and no choice to do 'wrong' would exist. Why would we give an AI free will at all? The only answer I can think of that a human creator would have is as a social experiment: to understand free will itself, and even ourselves and our perception of morality, in a real-world situation with real-world stressors (in psychology, lab experiments aren't considered the most accurate). Naturally, with the ethics codified in law, such a thing wouldn't be legal, but that wouldn't stop somebody from doing it. In some science fiction, free will occurs by accident rather than by design. Maybe we would even sympathise with these creations enough to say they deserve free will; maybe we would view them as more than machines. Perhaps a foolish concept, but one possibility. Ultimately, in my hypothetical situation the AIs would have free will. I don't think they would be a true intelligence without it.
If they have been declared independent, then their builders/programmers should not be held legally responsible. If they have not been declared independent, then, obviously, their builders/programmers should be held legally responsible.
This is the kind of answer I was looking for. I think it's interesting you give that response (because it's not the one I would have given), but I do have to ask: why? The builders gave them these flaws and, to a degree, built them faulty (a reference to the question where they spontaneously combust), and the builders gave them free will, in essence giving them that independence of thought. You suggested that realistically speaking we would build them to be better than us, but if we coded them with the same flaws and could foresee them exhibiting the same behaviour, then surely the responsibility would fall on the creators. If they are trying to mimic human behaviour, how could they not foresee things like school shootings, theft, war, and all the other crimes against humanity that humans themselves commit?
The discussion here also applies to God, or any creator deity. Should God be responsible for us, his creations? Should he be responsible for our flaws, because that is how he created us? Should he be responsible for the flaws of our physical beings, like disease? Why has God created us in such a way when, in his omniscience, he would have foreseen the effects? Does God have the right to destroy us because he created us? Does he have the right to torture us or cause us pain because he created us? Does he have the right to play with our emotions, or even have us cower in fear, because he created us?
I think it works both ways, too, because when we pose the 'Problem of Evil' argument, if we assume God exists, then we are essentially holding him to account: why does evil exist? Would this apply to us when it comes to our creations? It would be interesting to see androids hold us to account for the cruelties they've committed against each other because we created them that way.
I am curious as to how (if at all) somebody's attitude differs between a human creating something and a deity creating something.