I'm not terribly qualified to engage in free will debates. At best I have a Wikipedia-level understanding of neurology, and probably even less of a background in the philosophical discourse on the subject. But I'm going to go ahead and throw my take out there...
It seems to me that there is semantic wiggle room regarding free will. 'Free' can have several connotations; it could mean that one's will exists independently from (free from) material and/or physical causation, but I find that view to be without evidence. If, however, we allow for a spectrum between free will and non-free will, I think we can define 'free' in this case to mean the degrees of freedom of a system. I'm just not sure it makes much sense to talk in terms of a hard-line distinction between an entity that has free will and an entity that does not. Does a photon have free will? Does a fruit fly have free will? Does a dog have free will? Does a human have free will?
If we think in terms of degrees of freedom, it may be possible to gauge whether some entity has free will by evaluating the total number of causal variables involved in any resultant action. For a photon, there is a relatively small, limited number of causal variables dictating what the entity's next state will be (velocity, spin, position in the universe, etc.) given its current state. For a fruit fly, the number of causal variables increases dramatically, due to a) the sheer number of discrete entities involved (the number of molecules that make up the entity and their various states) and b) the sheer complexity of the system (the configuration of the fruit fly's brain being substantially more complex than a lump of goo containing the same number of molecules). By the time you step up to humans, you have an immensely complex system with a gargantuan number of causal variables. The 'out' in the free will debate, I guess, is to say that an entity has free will when it is of sufficient complexity that it is practically infeasible to precisely control or accurately predict its response to a given set of external stimuli. In a way, it's basically compatibilism without reference to concepts such as motivation or intent.
Of course, there are a number of problems with this. First and foremost, I have no expertise in neurology, information theory, or organic chemistry, so much of the above is simply a from-the-hip proposition. Secondly, the implication is that 'free will' is strictly a label attached to certain observed emergent phenomena, and thus probably does not apply to a lot of free will/no free will debates. Thirdly, this spectrum of free will suffers from the 'I know it when I see it' problem, insofar as there are objective measurements one can make to determine how much freedom a system has, but there is no objective line to draw the distinction.
Anyway...just feeding more thoughts into the topic.