Here's one interesting point:
In the 1970s, Benjamin Libet, a physiologist at the University of California, San Francisco, wired up the brains of volunteers to an electroencephalogram and told the volunteers to make random motions, like pressing a button or flicking a finger, while he noted the time on a clock.
Dr. Libet found that brain signals associated with these actions occurred half a second before the subject was conscious of deciding to make them. So the decision is made first and only then conveyed to consciousness. Interesting. According to a certain Dr. Silberstein, indirectly quoted by the NYT, "every physical system that has been investigated has turned out to be either deterministic or random." That would put the lie to any sort of predetermined path (since there is an element of random probability), but at the same time it would still preclude any sort of conscious decision-making element.
As I read the text, one of the major debates is about the idea of increasing complexity. That is, while (theoretically) knowing what is going on at a subatomic level enables you to make predictions, does the increasing complexity of institutions create its own set of new rules? As a polisci major, I would compare this to the complexity of global institutions versus local ones. It is relatively easy to predict events in a town of 8 people, harder in a larger area with more institutions such as the state of Texas, and harder yet for the entire world, which has its own self-determining institutions such as the UN and sovereign states. A comparison to biology would invoke individual cells vs. organs vs. an entire organism. The major point is that the new institutions, or larger organisms, are self-regulating in a way that fundamentally changes the old rules.
To quote: "In 1930, the Austrian philosopher Kurt Gödel proved that in any formal system of logic, which includes mathematics and a kind of idealized computer called a Turing machine, there are statements that cannot be proven either true or false." A closely related result, Turing's halting problem, shows that in general a computer cannot tell you in advance whether (or how long) a computation will run, or what its result will be, because the process of finding that answer amounts to performing the computation itself. I think that translates well into reality: the only way to know the result of your action is to carry out the action.
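That last point — that finding the answer amounts to performing the computation — can be illustrated with a small sketch. This is my own example, not from the article; the Collatz iteration here is just a stand-in for a computation with no known shortcut, so the only way to learn how many steps it takes is to run it.

```python
def collatz_steps(n: int) -> int:
    """Count steps of the Collatz map (n -> n/2 if even, n -> 3n+1 if odd)
    until n reaches 1. No general formula for this count is known;
    the only known way to get the answer is to perform the iteration."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# To know how long the "action" takes, you must carry it out:
print(collatz_steps(27))  # running the computation IS the prediction
```

The design point is that `collatz_steps` does not predict anything; it simply executes the process and reports what happened, which is exactly the situation the quoted argument describes.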
So then, my take: I find that such philosophical arguments detract from the question of what we are doing here and now. After all, regardless of whether or not we actually have free will, we're under the illusion that we do, aren't we? We might as well use that illusion, if it is one, to make things better. There's nothing to lose.