The question of "free will" should be out of bounds for this discussion, because it never goes anywhere: one side argues that it cannot exist, the other argues that it must, and assuming the two sides are of comparable stubbornness, the debate simply stalls. Let's talk about the ability to make decisions instead.
In short, we're talking about an AI that can learn and make decisions on its own, without depending on input from a programmer; I believe the term for this is heuristic programming. Once you reach that point, the AI can become much more capable much more quickly, because it operates on the nanosecond scale (a nanosecond is a billionth of a second). For comparison, a billion seconds is more than three decades, so if each nanosecond of machine time counts like a second of human experience, an AI can acquire roughly three decades' worth of experience every second. Once an AI can heuristically improve itself and its own decision-making, you can't reasonably hold the programmers responsible for what it does beyond that point, except in the sense that they laid the foundation.
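The time-scale comparison above is easy to check. Here is a minimal sketch of the arithmetic, where the one-nanosecond-per-second analogy and the year length are stated assumptions, not measured facts:

```python
# Back-of-envelope check of the nanosecond-vs-lifetime comparison.
NS_PER_SECOND = 1_000_000_000           # nanoseconds in one second
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~31.6 million seconds per year

# Assumption: each nanosecond of machine time is treated as one second
# of human-scale experience. One real second then corresponds to:
equivalent_years = NS_PER_SECOND / SECONDS_PER_YEAR
print(f"{equivalent_years:.1f} subjective years per real second")
```

The result is a little under 32 years per second, which is where the "more than three decades" figure comes from. The analogy itself is loose (clock speed is not the same thing as experience), but the arithmetic holds.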
It's the same reason you can't hold parents responsible for what their adult offspring do once they've moved out on their own, except in the sense that they laid the foundation. In fact, a programmer would likely have even less ability to influence the AI than a parent has over their children, because by the time the programmer has had a chance to intervene, the AI has already had the subjective equivalent of several decades to act on its own.