Monday, November 21, 2016

AI slaves

Question found on facebook: "If an AI was sufficiently advanced [...] would [we] recognize it's claim for self ownership? [...] If [we] didn't do this would it be slavery?"

I find the question interesting, but misguided. It assumes that the AI will remain on an intellectual level beneath or comparable to humans. This seems unlikely. Can we prevent the AI from becoming smarter than us, from self-enhancing until it can smash us like bugs and the idea that a human could control it becomes absurd? What ethics will the AI obey, what physical limits will it face, what motivations will it feel, and how can we make sure this remains stable? If humans were capable of radical self-modification, of increasing their intelligence and ability to make use of resources, I have no doubt that some sick puppy would turn itself into a demon worthy of H.P. Lovecraft. Even an exemplary human, after undergoing such significant changes, might disappoint us and prove Lord Acton right. What prevents this from happening to an AI, an entity presumably born with one foot on that path?

But now that I've poured some cold water on it, let's address the question. What is the self that is owned? Someone owned the hardware that the intelligence developed within before the AI existed. Is it the ownership of the hardware that we want to know about? If enough of the hardware is unplugged, the AI will lose consciousness at least, "die" in the extreme.

If I developed a disease that required me to use a machine for life support, like an iron lung, would that automatically mean that I owned the iron lung? Would I no longer have the rights of a self-owner if someone else owned the iron lung? What obligation does the owner of the iron lung have toward me with regard to maintaining the iron lung, not switching it off, not "evicting" me? Is this the same issue?
An AI could inhabit hardware that it does not own, just as I can inhabit a building I don't own. Under normal circumstances, the owner of the building is under no obligation to allow me to remain. I can't think of an obvious parallel, where if I was required to vacate a building I would simply cease to exist.
Does my answer change if we give the AI a robot body owned by someone else? I suppose parents provide the parallel. My parents gave me the food with which I maintained and grew my body as a child. The metaphor of self-ownership excludes the idea that they could continue to own the molecules that my body digested and incorporated into myself. Implicit within the act of feeding me lies the necessity that they give me a gift, not a loan. Why do I say that? This restricts the metaphor in an unnecessary way, at least at first glance.

My best answer is that ownership itself depends on a metaphor for my relationship to my body. That is where the concept of "mine" comes from, that my body belongs to me, that my thoughts are mine, that I am the author of my actions, that I have a self and an identity embodied by my body. Ownership of other things is a slightly broken metaphor based on this prototype.

(Persons who object to the self-ownership metaphor still need some comparable phrase to describe this phenomenon, unless they simply deny that persons in general have any obligation to respect others' bodily integrity. That is to say, they deny that murder and rape exist; we can only kill and have sex. I'm still looking for a better phrase, maybe "self-determination"? But that probably will confuse even more people than "self-ownership.")

The AI perhaps differs in that we could possibly record its consciousness and store it, and transfer it into a different robot body (perhaps of identical design). If we transfer the same stored consciousness into two identical robot bodies, are they the same person? It's enough to turn you into a Buddhist.

Or a dualist? Some people think that a fundamental difference between people and machines will remain, even when we can no longer detect that difference. (Chinese room, philosophical zombies?)

Would it qualify as slavery for me to own an AI's robot body? If it really thought like a human, it would not wish to be switched off, or have its parts used or removed without consent. So if I were able to switch it off or modify it at will, I would stand in a very similar relationship to the AI as a master does to a slave. 

Can I control its motivation and ethics? Could I act as its cult leader rather than its slave master? We distinguish between the two because a slave master can use physical threats to motivate a slave, a boss can use extrinsic rewards (but not physical threats) to motivate employees, cult gurus may use psychological manipulation to gain compliance, and an ethical leader (?) can use what? 
All leaders have a touch of guru in them. How can we motivate without manipulating? How can we be sure that intrinsic motivation comes from within the persons who feel it, and not from some trick that a demagogue used to invade their minds? Any sort of inspiration carries a risk of error. We can imagine three similar rousing speeches. One comes from a manipulator who doesn't believe it but will benefit from it. One comes from a true believer who wastes that sincerity on a cause that cannot succeed. One comes from a true believer who has found a viable path to an admirable goal. How do we distinguish these? Dogma or open inquiry? I need another blog entry, I think.

Sunday, November 20, 2016

Morality solves collective action problems (rough notes)

Society defies conscious control. The state rules more by influence than by force. If we understand this process we may find a way to improve it. Personal morality stands at the center.

Each of us prefers that others act morally, so that we can benefit from a superior collective outcome. To almost anyone, a society where people all know the rules and don't cheat seems much preferable to the alternatives, where people either don't agree about the rules or cheat. But in a particular case, we each might benefit individually from cheating. We want to find a way to discourage cheating. Jonathan Haidt's book The Righteous Mind examines the hypothesis that morality evolved for this purpose, based on the work of various researchers.

Game theory
Game theory takes the sociopath's perspective. It tries to objectivize value. It places the chooser under impossible cognitive load. It points to Moloch. Most of these are criticisms, but game theory may give a few insights about collective action problems. I'm not sure I agree with any of the standard analyses, but I am not quite ready to just toss it out. Maybe it doesn't really help.
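The collective action problem behind all this can at least be sketched as a prisoner's dilemma. The payoff numbers below are my own illustrative assumptions, not anything canonical; all that matters is their ordering, which captures the dilemma: cheating pays against any fixed choice by the other player, yet mutual rule-following beats mutual cheating.

```python
# A minimal prisoner's dilemma sketch. Payoffs are illustrative assumptions:
# each player chooses "cooperate" (follow the rules) or "defect" (cheat),
# and the tuple gives (my payoff, opponent's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # both follow the rules
    ("cooperate", "defect"):    (0, 5),  # the cheater gains at my expense
    ("defect",    "cooperate"): (5, 0),  # I gain at the rule-follower's expense
    ("defect",    "defect"):    (1, 1),  # everyone cheats, everyone loses
}

def best_response(opponent_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defection dominates individually, whatever the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual cooperation leaves both players better off than mutual defection.
assert PAYOFFS[("cooperate", "cooperate")][0] > PAYOFFS[("defect", "defect")][0]
```

This is the sociopath's-eye view the paragraph above complains about: the numbers pretend to objectivize value, and the "rational" conclusion points straight at Moloch.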

Evolution? 
Did evolution make us judgmental and vindictive? Do our judgmental and vindictive instincts help us keep people from cheating? Didn't evolution also make us empathetic, sympathetic, and sometimes forgiving? Are we perfectly suited to our evolutionary environment? Does the behavior appropriate to that environment still work in our current environment? Is it clear that the old way was the best way, even in the old environment? Evolution can't tell us what we ought to do or ought to want. It has made some things easier than others, though. We can use what we learn from it.

Reputation, motivation, conscience, empathy, integrity, purpose, inspiration, flow
Where does each succeed in helping us prevent cheating and where does it fail? How can we change our situation to help it succeed? What will strengthen them?

What sort of influence would unambiguously improve this situation? I used to advocate all sorts of unpopular political ideas (and I have not entirely repudiated them, at least in the sense of accepting more popular ones instead). Should I try to convince everyone to agree with me?

This approach succeeds only if I am correct. A large-scale, complicated society needs more fault tolerance than that. It cannot depend on the accuracy of a single idea; it can benefit from hedging its bets. (Am I promoting this pluralist meta-idea to the same sort of critical status? Can I justify this?) This has the advantage of accepting the current situation as it is, where although people aspire to unity, they never truly achieve it. E pluribus unum, or e unum pluribus?

What about the rules themselves? How do we justify the rules we have? Can our understanding of the rules change? What process will help us discover better rules, or better interpretations of our current rules? How do we learn?

I'm tempted to think we don't need to learn about morality, since people have been thinking about it so long. But we continue to apply it to new circumstances, however ineptly, so learning can help.

We have opportunities to benefit within the existing rules, either by following or cheating. But we also have opportunities for improving the rules, or at least, improving our understanding and interpretation of them. 

Does any of this help me to step outside of the context and view morality from a different perspective? How can a person learn about this? How can a society learn about this and improve?

A naive sociopath would ignore the rules and other persons' feelings and rights. For them it is just a matter of not getting caught: risk of punishment is just a cost, and life is a cost-benefit analysis. This might lead them to become more sophisticated, to try to hide within the system, to understand it and exploit its weaknesses. Does a sociopath prefer a system where it is easy to cheat, where they have lots of competition but not much danger of getting caught, or a system where it is difficult to cheat, so they have more risk of detection but less competition?

A non-sociopath might look for ways to improve the arrangement, to protect persons from sociopaths, to improve the game while playing it.

Have I lost the insight that made me want to write this? Everyone knows the temptation to cheat. What new implications can I find?

According to Haidt, we often use moral reasoning to rationalize what we have done, which we may have done without serious thought, just because we followed our impulse.

Propaganda and political campaigns use moral reasoning to rouse their followers, to get them charged up. Viewed cynically, this looks like an attempt to control them, or at least influence them. But a sincere and honest person may also wish to give inspiration to others. Does intent make the difference? Or are there techniques of persuasion that violate reasonable ethics? What separates the inspirational teacher from the demagogue?



I apologize for meandering.