We can't draw any practical conclusions or implications from the truth or falsity of the idea of free will.
Some critics of the idea of free will seem to think that this issue has important implications for the penal system and for how we treat criminals. If criminals could not have done otherwise, what good do we do by saying to them, "You should have done otherwise"? These critics seem to think that I might as well scold my Roomba for missing that spot in the corner and punish it by putting it in a box.
Let's assume they are right, that persons are just unusually complicated robots.
Whether we have free will or not, we can't change the past. "You should not have done that" never means "go back, erase what happened, and do it differently". What we say to each other about the past, and what we do to each other in reaction to the past, are all oriented toward the future.
If we are all robots, we are complicated robots that change their behavior depending on the complicated inputs they receive. "You should not have done that" might or might not cause a particular robot to change its behavior in the future. Whether it does is an empirical question, depending on the robot's programming and its other inputs, not on the truth or appropriateness of any moral judgement expressed. The statement is a counterfactual and an input to our processing. It has the (perhaps?) unusual property that it might work if the robot believes it, whether or not it is literally true. (Many social norms share this characteristic: if everyone expects things to work in a certain way, and acts accordingly, things will work that way.)
This case presents a perplexing (perhaps rare) philosophical situation: a statement about reality seems to draw a sharp distinction between how things are and how they are not, yet those who believe one variant have no reason to act differently from their philosophical opponents.
We can imprison people because we think they have free will and "deserve" it, hoping that they will "repay their debt to society" and perhaps learn from it, gaining some rehabilitation; or we can imprison people because we think they are robots and being imprisoned alters their programming in a positive way while keeping them away from innocent persons they might hurt. We can scold people because we think they have free will and scolding might change their attitudes, or we can scold them because we think they are robots and scolding might alter their programs. The truth of free will does not enter into the equation. People learn from experience and change their behavior in either case. If we can discover technical or environmental supports that help people act more responsibly toward each other, we can use them whether or not people have free will. Free will is not empirically observable; responses to interventions are.
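To make the robot metaphor concrete, here is a minimal toy sketch (all names are hypothetical, invented for illustration). It models a "robot" whose behavior updates in response to scolding; a metaphysical `has_free_will` label exists on the robot but never appears in the update rule, mirroring the claim that interventions work the same way regardless of whether free will is true.

```python
class Robot:
    """A toy agent whose behavior changes with its inputs."""

    def __init__(self, has_free_will):
        # Metaphysical label: stored, but deliberately never consulted
        # by the behavioral update below.
        self.has_free_will = has_free_will
        self.misbehave_tendency = 1.0  # starting propensity to misbehave

    def receive(self, message):
        # "You should not have done that" is just an input that
        # reprograms future behavior; its moral truth is irrelevant here.
        if message == "you should not have done that":
            self.misbehave_tendency *= 0.5


believer = Robot(has_free_will=True)
skeptic = Robot(has_free_will=False)

for robot in (believer, skeptic):
    robot.receive("you should not have done that")

# Same inputs, same resulting behavior; the flag changed nothing.
assert believer.misbehave_tendency == skeptic.misbehave_tendency
```

The point of the sketch is structural: the `has_free_will` attribute is dead code with respect to behavior, just as, in the argument above, the truth of free will does not enter into the equation.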
Maybe "You have free will" is a lie. But maybe it is the sort of programming that helps hairy robots grow and develop. Or maybe it is harmful or irrelevant. These are empirical questions, but the empirical result does not reveal the truth value of the statement "I have free will".