Monday, November 21, 2016

AI slaves

Question found on facebook: "If an AI was sufficiently advanced [...] would [we] recognize it's claim for self ownership? [...] If [we] didn't do this would it be slavery?"

I find the question interesting, but misguided. It assumes that the AI will remain on an intellectual level beneath or comparable to humans. That seems unlikely. Can we prevent the AI from becoming smarter than us, from self-enhancing until it can smash us like bugs and the idea that a human could control it becomes absurd? What ethics will the AI obey, what physical limits will it face, what motivations will it feel, and how can we make sure all of this remains stable? If humans were capable of radical self-modification, of increasing their intelligence and their ability to make use of resources, I have no doubt that some sick puppy would turn itself into a demon worthy of H.P. Lovecraft. Even an exemplary human, after undergoing such significant changes, might disappoint us and prove Lord Acton right. What prevents this from happening to an AI, an entity presumably born with one foot already on that path?

But now that I've poured some cold water on it, let's address the question. What is the self that is owned? Someone owned the hardware that the intelligence developed within before the AI existed. Is it the ownership of the hardware that we want to know about? If enough of the hardware is unplugged, the AI will at least lose consciousness, and in the extreme "die."

If I developed a disease that required me to use a machine for life support, like an iron lung, would that automatically mean that I owned the iron lung? Would I no longer have the rights of a self-owner if someone else owned the iron lung? What obligation does the owner of the iron lung have toward me with regard to maintaining the iron lung, not switching it off, not "evicting" me? Is this the same issue?
An AI could inhabit hardware that it does not own, just as I can inhabit a building I don't own. Under normal circumstances, the owner of the building is under no obligation to allow me to remain. I can't think of an obvious parallel, where if I was required to vacate a building I would simply cease to exist.
Does my answer change if we give the AI a robot body owned by someone else? I suppose parents provide the parallel. My parents gave me the food with which I maintained and grew my body as a child. The metaphor of self-ownership excludes the idea that they could continue to own the molecules that my body digested and incorporated into itself. Implicit within the act of feeding me lies the necessity that they give me a gift, not a loan. Why do I say that? At first glance, at least, it restricts the metaphor in an unnecessary way.

My best answer is that ownership itself depends on a metaphor for my relationship to my body. That is where the concept of "mine" comes from, that my body belongs to me, that my thoughts are mine, that I am the author of my actions, that I have a self and an identity embodied by my body. Ownership of other things is a slightly broken metaphor based on this prototype.

(Persons who object to the self-ownership metaphor still need some comparable phrase to describe this phenomenon, unless they simply deny that persons in general have any sort of obligation to respect others' bodily integrity. That is to say, they deny that murder and rape exist; we can only kill and have sex. I'm still looking for a better phrase, maybe "self-determination"? But that will probably confuse even more people than "self-ownership.")

The AI perhaps differs in that we could possibly record its consciousness, store it, and transfer it into a different robot body (perhaps of identical design). If we transfer the same stored consciousness into two identical robot bodies, are they the same person? It's enough to turn you into a Buddhist.

Or a dualist? Some people think that a fundamental difference between people and machines will remain, even when we can no longer detect that difference. (Chinese room, philosophical zombies?)

Would it qualify as slavery for me to own an AI's robot body? If it really thought like a human, it would not wish to be switched off, or have its parts used or removed without consent. So if I were able to switch it off or modify it at will, I would stand in a very similar relationship to the AI as a master does to a slave. 

Can I control its motivation and ethics? Could I act as its cult leader rather than its slave master? We distinguish between the two because a slave master can use physical threats to motivate a slave, a boss can use extrinsic rewards (but not physical threats) to motivate employees, cult gurus may use psychological manipulation to gain compliance, and an ethical leader (?) can use what? 
All leaders have a touch of guru in them. How can we motivate without manipulating? How can we be sure that intrinsic motivation comes from within the persons who feel it, and not from some trick that a demagogue used to invade their minds? Any sort of inspiration carries a risk of error. We can imagine many similar rousing speeches. One comes from a manipulator who doesn't believe it but will benefit from it. Another comes from a true believer who wastes his sincerity on a cause that cannot succeed. A third comes from a true believer who has found a viable path to an admirable goal. How do we distinguish these? Dogma or open inquiry? I need another blog entry, I think.
