Tuesday, January 9, 2018

Frame of reference

Every question comes with a frame of reference, a set of assumptions that help it make sense, a background against which it blends or contrasts. If we examine our frame of reference, we must provide a new one. How does this process end?

Relativism recognizes that we can analyse the same thing from different perspectives by changing our assumptions. It does not insist that we must value all those perspectives equally. That sort of relativism deserves a different name or qualifier. Maybe we should call it value relativism, or meta-relativism, or just nihilism.

We struggle with our thoughts to make plans that succeed. We rationalize our actions and seek to justify them in retrospect, hoping to maintain or improve our status. Maybe we're just curious.

Monday, January 8, 2018

Attacking/Defending Donald Hoffman's reality

An uncharitable interpretation of Donald Hoffman might view him as claiming that a completely deluded agent has an advantage over an agent with some grip on reality. I hope he does not actually take this extreme view. 

Hoffman likes to use the graphical user interface of a PC as an analogy. Computer users do not wish to know everything about what is happening in their PC. They find it much easier to deal with an abstract representation that in some ways corresponds to the internal workings of the computer, and in other ways is just made up to make sense to the user. An icon's image or location on the desktop says nothing important about it at the deep level. These are just handles that the user can manipulate for reasons that have nothing to do with the internal operations of the computer. But there are some important connections.

We can interpret his metaphor to cast the conscious mind in the role of the user, reality as the computer hardware, and the unconscious mind and perceptions as the GUI. Some aspects of what we perceive correspond indirectly to reality, others result from processes within the unconscious. Does this get us to agreement with Hoffman, or did he overstate his case?

Hoffman does not explain his work in a way that makes it easy to agree with him. But we should keep in mind what we are agreeing with and what we are disagreeing with.

Hoffman presents some very provocative results. He describes them verbally in absurd terms. I would summarize his conclusions as "evolution favors deluded agents over agents who perceive reality correctly" and "consciousness is the foundation of reality". These both seem ridiculous at first glance.

But Hoffman does not give us sufficient information to judge him fairly. These conclusions summarize the results of a computer simulation and a mathematical theory. We must understand those before we can give Hoffman a charitable interpretation. Does Hoffman interpret his models and his application of their terms to our reality accurately? Without the details, we can't know, and Hoffman does not provide much detail in his interviews for the popular press.

Does the following example parallel Hoffman's idea? Humans evolved a response to danger in our ancestral environment. Evolution favored strategies that may generate more non-fatal errors so long as they tend to avoid fatal errors more reliably. So we respond to dangers that do not actually exist more often than we ignore real dangers (in the ancestral environment). The costs of the two sorts of error differ, and heuristics that sacrifice one measure of accuracy to avoid the more costly sort of error can make sense. We can imagine simulating this idea and having nervous nellies outcompete others who more accurately predict danger but who make fatal mistakes more often.

Now imagine that Hoffman's simulation does something similar, but only allows perception to vary. All agents use the same strategy for avoiding danger, but have different abilities to perceive reality accurately. The perceptions that evolve may also reflect the difference in the cost of error. Hallucinating danger more often would be an acceptable cost, if it is counterbalanced by a sufficiently lower chance of ignoring potentially fatal danger.
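This toy version of the idea can be sketched in a few lines of code. To be clear, this is my own invented model, not Hoffman's actual simulation; every name and parameter here (the danger rate, the noise level, the costs) is an assumption chosen only to illustrate the asymmetry between fatal and non-fatal errors.

```python
import random

def lifetime_fitness(threshold, trials=1000, danger_rate=0.1,
                     false_alarm_cost=1.0, forage_gain=2.0, seed=0):
    """One agent's lifetime. A noisy signal hints at danger; the agent
    flees whenever the signal exceeds its threshold. Missing a real
    danger is fatal and ends the lifetime immediately."""
    rng = random.Random(seed)
    fitness = 0.0
    for _ in range(trials):
        danger = rng.random() < danger_rate
        # Signal is centred at 1 when danger is present, at 0 when absent.
        signal = (1.0 if danger else 0.0) + rng.gauss(0.0, 0.5)
        flees = signal > threshold
        if danger and not flees:
            return fitness               # fatal miss: no further foraging
        if not danger:
            if flees:
                fitness -= false_alarm_cost  # needless flight wastes food
            else:
                fitness += forage_gain       # foraged safely
    return fitness

def mean_fitness(threshold, lifetimes=200):
    return sum(lifetime_fitness(threshold, seed=s)
               for s in range(lifetimes)) / lifetimes

# A "paranoid" agent (low threshold) hallucinates danger often but rarely
# dies; an "accurate" agent uses the statistically optimal boundary
# halfway between the two signal distributions.
paranoid = mean_fitness(threshold=0.0)
accurate = mean_fitness(threshold=0.5)
```

In this sketch the paranoid agents typically accumulate more average fitness, despite fleeing from shadows half the time, because the more accurate classifiers die young more often. That is the whole point of the asymmetry: the two error rates trade off, and their costs are not symmetric.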

I found this counterintuitive at first. I was tempted to think that more accurate perception always gives us an advantage over less accurate perception, and that we should analyse any difference in strategy separately. But this artificial distinction misleads us. Scientists no longer think the brain works that way; the line between unconscious thought and perception is fuzzy. (For example, the sensitivity of the rods and cones in the eye responds to our emotional state.)

Even if we assume a separation between perception and strategic response, the speed and cognitive cost of perceptions could affect fitness in addition to their accuracy. Increases in accuracy may trade off against these other factors. We can't just assume that more accuracy always improves fitness. We have to count the cost of the accuracy in terms of the other factors sacrificed.

So I can't be sure that I agree with Hoffman, but I am not sure I disagree either. Can we interpret what happened in his simulation unambiguously to support his conclusion? How far from accuracy could this process lead? I don't know.

Hoffman's mathematical model of a consciousness-based universe seems even more difficult to criticize from a simple verbal summary. He seems to say that he has some math that derives quantum reality from more primitive entities that can be interpreted as consciousness. It is up to mathematical physicists to say whether the math works. Can we interpret the math without understanding it?

Is the math correct? Does the interpretation work? We can view Hoffman's argument as a reductio ad absurdum. This gives us a choice: if the intervening logic works, we either accept the result or criticize the assumptions. Which assumptions specifically must we reject? Hoffman has not really made this clear in his popular summaries. We would need to read Hoffman's serious papers to find out what he assumes.

Friday, January 5, 2018

Dogmatism grooming

We're not supposed to like dogma, dogmatists or dogmatism.

I think murder is bad. It's kind of built into the definition. If it wasn't bad, we would call it "killing in self defense" or "accidental death" or something else. Murder is what we call it after a judgement has been made that a wrongful killing took place. We can doubt individual judgements, but can we doubt the principle?

What sort of situation or argument should make me want to consider that murder might be a good thing? So, am I being dogmatic? You must admit that the world might be a better place minus some of its most disagreeable persons. So, should I switch to a consequentialist frame on murder? I don't like that sort of consequentialism.

How about rape? I can't think of any credible scenario where rape is going to make the world a better place. So, shall we endorse dogmatism on this topic? Rape is bad.

So if we can justify dogmatism in some circumstances, how do we explain the distinction between justifiable and unjustifiable dogmatism? What sorts of justifications work?

Usually when someone accuses me of dogmatism, their position seems just as dogmatic to me. How can we tell which one, or neither, really fits the costume?

I think we can find good reasons for using caution about dogmatism. If the dogmatists must shield their dogma from criticism, that's probably a mistake. What else makes dogma into dogma?

I'm willing to listen to all sorts of criticisms about my ideas that murder and rape are bad. If I changed my mind, that would surprise me, but changing my mind usually surprises me. Such surprises are hardly surprising. If I expected to change my mind, I would have changed it already.

So, does a feeling of comfortable certainty qualify me as a dogmatist, assuming I'm willing to think seriously about rival hypotheses and the reasons people give for them?

Meta-libertarian criticism of anarcho-capitalism

  • Meta-libertarianism: Don't force anyone to accept a particular system. Let each choose for themselves. This does not contradict the ancap idea directly, but ancaps tend to overstate their position, as if history will end and everyone will agree on one best solution.
  • Ancaps criticize the moral justification for the state's special status (there is none). But a moral critique does not advise us when choosing between reform and abolition. Arguably, just abolishing slavery should have been sufficient (and practical problems transitioning from slavery to something else would not excuse hesitation). But even in that simpler case, complications occurred. The state has intertwined itself into our lives in a much more complicated way than slavery did. We can't destroy society in order to save it. We need to cultivate it, to heal it.
  • The innovations they support are not the only ones that might eliminate the problems they criticize. Not everyone agrees that their solutions will work, or would improve our lives compared to some other possible change. As bad as they may be, things can always get worse. 
  • Ancaps declare reform of the government to be impossible, but the evidence they have for this makes it seem unlikely but not impossible. If reform is merely difficult, that undercuts the urgency of the more extreme suggestions.

Monday, November 21, 2016

AI slaves

Question found on facebook: "If an AI was sufficiently advanced [...] would [we] recognize it's claim for self ownership? [...] If [we] didn't do this would it be slavery?"

I find the question interesting, but misguided. The question assumes that the AI will remain on an intellectual level beneath or comparable to humans. This seems unlikely. Can we prevent the AI from becoming smarter than us, from self-enhancing until it can smash us like bugs and the idea that a human could control it becomes absurd? What ethics will the AI obey, what physical limits will it face, what motivations will it feel, and how can we make sure this remains stable? If humans were capable of radical self-modification, of increasing their intelligence and ability to make use of resources, I have no doubt that some sick puppy would turn itself into a demon worthy of H.P. Lovecraft. Even an exemplary human, after having undergone such significant changes, might disappoint us and prove Lord Acton right. What prevents this from happening to an AI, an entity that presumably was born with one foot on that path?

But now that I've poured some cold water on it, let's address the question. What is the self that is owned? Someone owned the hardware that the intelligence developed within before the AI existed. Is it the ownership of the hardware that we want to know about? If enough of the hardware is unplugged, the AI will lose consciousness at least, "die" in the extreme.

If I developed a disease that required me to use a machine for life support, like an iron lung, would that automatically mean that I owned the iron lung? Would I no longer have the rights of a self-owner if someone else owned the iron lung? What obligation does the owner of the iron lung have toward me with regard to maintaining the iron lung, not switching it off, not "evicting" me? Is this the same issue?

An AI could inhabit hardware that it does not own, just as I can inhabit a building I don't own. Under normal circumstances, the owner of the building is under no obligation to allow me to remain. But I can't think of an obvious parallel where, if I were required to vacate a building, I would simply cease to exist.

Does my answer change if we give the AI a robot body owned by someone else? I suppose parents provide the parallel. My parents gave me the food with which I maintained and grew my body as a child. The metaphor of self-ownership excludes the idea that they could continue to own the molecules that my body digested and incorporated into itself. Implicit within the act of feeding me lies the necessity that they gave me a gift, not a loan. Why do I say that? It restricts the metaphor in an unnecessary way, at least at first glance.

My best answer is that ownership itself depends on a metaphor for my relationship to my body. That is where the concept of "mine" comes from, that my body belongs to me, that my thoughts are mine, that I am the author of my actions, that I have a self and an identity embodied by my body. Ownership of other things is a slightly broken metaphor based on this prototype.

(Persons who object to the self-ownership metaphor still need some comparable phrase to describe this phenomenon, unless they simply deny that persons in general have any sort of obligation to respect others' bodily integrity. That is to say, they deny that murder and rape exist; we can only kill and have sex. I'm still looking for a better phrase, maybe "self-determination"? But that probably will confuse even more people than "self-ownership.")

The AI perhaps differs in that we could possibly record its consciousness and store it, and transfer it into a different robot body (perhaps of identical design). If we transfer the same stored consciousness into two identical robot bodies, are they the same person? It's enough to turn you into a Buddhist.

Or a dualist? Some people think that a fundamental difference between people and machines will remain, even when we can no longer detect that difference. (Chinese room, philosophical zombies?)

Would it qualify as slavery for me to own an AI's robot body? If it really thought like a human, it would not wish to be switched off, or have its parts used or removed without consent. So if I were able to switch it off or modify it at will, I would stand in a very similar relationship to the AI as a master does to a slave. 

Can I control its motivation and ethics? Could I act as its cult leader rather than its slave master? We distinguish between the two because a slave master can use physical threats to motivate a slave, a boss can use extrinsic rewards (but not physical threats) to motivate employees, cult gurus may use psychological manipulation to gain compliance, and an ethical leader (?) can use what?

All leaders have a touch of guru in them. How can we motivate without manipulating? How can we be sure that intrinsic motivation comes from within the persons who feel it, and not from some trick that a demagogue used to invade their minds? Any sort of inspiration carries a risk of error. We can imagine many similar rousing speeches. One comes from a manipulator who doesn't believe it but will benefit from it. Another comes from a true believer who wastes this sincerity on a cause that cannot succeed. A third comes from a true believer who has found a viable path to an admirable goal. How do we distinguish these? Dogma or open inquiry? I need another blog entry, I think.

Sunday, November 20, 2016

Morality solves collective action problems (rough notes)

Society defies conscious control. The state rules more by influence than by force. If we understand this process we may find a way to improve it. Personal morality stands at the center.

Each of us prefers that others act morally, so that we can benefit from a superior collective outcome. A society where people all know the rules and don't cheat seems much preferable to almost anyone, compared to the alternatives where either people don't agree about rules or cheat. But in a particular case, we each might benefit individually from cheating. We want to find a way to discourage cheating. Jonathan Haidt's book, The Righteous Mind examines the hypothesis that morality evolved for this purpose, based on the work of various researchers.

Game theory
Game theory takes the sociopath's perspective. It tries to objectivize value. It places the chooser under impossible cognitive load. It points to Moloch. Most of these are criticisms, but game theory may give a few insights about collective action problems. I'm not sure I agree with any of the standard analyses, but I am not quite ready to just toss it out. Maybe it doesn't really help.
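For what it's worth, the collective action structure here maps onto the textbook prisoner's dilemma. A minimal sketch, using the conventional payoff numbers (the numbers themselves are arbitrary; only their ordering matters):

```python
# Prisoner's dilemma payoffs: each player does better by cheating no
# matter what the other does, yet mutual cooperation beats mutual cheating.
PAYOFF = {  # (my move, their move) -> my payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "cheat"):     0,
    ("cheat",     "cooperate"): 5,
    ("cheat",     "cheat"):     1,
}

def best_response(their_move):
    """The individually rational reply, ignoring everyone's feelings --
    which is exactly the sociopath's perspective mentioned above."""
    return max(("cooperate", "cheat"),
               key=lambda my_move: PAYOFF[(my_move, their_move)])
```

Both `best_response("cooperate")` and `best_response("cheat")` come out as `"cheat"`, even though two cheaters each earn 1 while two cooperators each earn 3. That gap between the individually rational outcome and the collectively preferable one is the collective action problem in miniature.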

Did evolution make us judgmental and vindictive? Do our judgmental and vindictive instincts help us keep people from cheating? Didn't evolution also make us empathetic, sympathetic, and forgiving sometimes? Are we perfectly suited to our evolutionary environment? Does the behavior appropriate to that environment still work in our current environment? Is it clear that the old way was the best way, even in the old environment? Evolution can't tell us what we ought to do or ought to want. It has made some things easier than others, though. We can use things we learn from it.

Reputation, motivation, conscience, empathy, integrity, purpose, inspiration, flow
Where does each succeed in helping us prevent cheating and where does it fail? How can we change our situation to help it succeed? What will strengthen them?

What sort of influence would unambiguously improve this situation? I used to advocate all sorts of unpopular political ideas (and I have not entirely repudiated them, at least in the sense of accepting more popular ones instead). Should I try to convince everyone to agree with me?

This approach succeeds only if I am correct. A large scale complicated society needs more fault tolerance than that. It cannot depend on the accuracy of a single idea, it can benefit from hedging its bets. (Am I promoting this pluralist meta-idea to the same sort of critical status? Can I justify this?) This has the advantage of accepting the current situation as it is, where although people aspire to unity, they never truly achieve it. E pluribus unum, or e unum pluribus?

What about the rules themselves? How do we justify the rules we have? Can our understanding of the rules change? What process will help us discover better rules, or better interpretations of our current rules? How do we learn?

I'm tempted to think we don't need to learn about morality, since people have been thinking about it so long. But we continue to apply it to new circumstances, however ineptly, so learning can help.

We have opportunities to benefit within the existing rules, either by following or cheating. But we also have opportunities for improving the rules, or at least, improving our understanding and interpretation of them. 

Does any of this help me to step outside of the context and view morality from a different perspective? How can a person learn about this? How can a society learn about this and improve?

A naive sociopath would ignore the rules and other persons' feelings and rights. For them it is just a matter of don't get caught. Risk of punishment is just a cost to them and life is a cost-benefit analysis. This might lead them to become more sophisticated, to try to hide within the system, to understand it and exploit its weaknesses. Does a sociopath prefer a system where it is easy to cheat, where they have lots of competition but not much danger of getting caught, or a system where it is difficult to cheat, so they have more risk of detection but less competition?

A non-sociopath might look for ways to improve the arrangement, to protect persons from sociopaths, to improve the game while playing it.

Have I lost the insight that made me want to write this? Everyone knows the temptation to cheat. What new implications can I find?

According to Haidt, we often use moral reasoning to rationalize what we have done, which we may have done without serious thought, just because we followed our impulse.

Propaganda and political campaigns use moral reasoning to rouse their followers, to get them charged up. Viewed cynically, this looks like an attempt to control them, or at least influence them. But a sincere and honest person may also wish to give inspiration to others. Does intent make the difference? Or are there techniques of persuasion that violate reasonable ethics? What separates the inspirational teacher from the demagogue?

I apologize for meandering. 

Sunday, October 2, 2016

My evening with Ed Snowden

It turns out that Edward Snowden was one of the presenters at the cryptoparty I attended on 2/23/13. At least, I think so. He wasn't famous then. Hi Cap had two crypto parties, one on 12/11/12 and one on 2/23/13. Snowden helped organize the first one and spoke, according to Wired Magazine. This goes along with my emails from HI Capacity announcing the event and including Snowden's now famous email address, cincinnatus@lavabit.com. But I have no notes from the first event, just the second one. The announcement of the second does not include Snowden's email. So my memory is a bit fuzzy. I definitely remember a speaker who could be Snowden, a young contractor. It's possible that I went to both events and didn't take notes at the first, or that Snowden appeared at both (he didn't become famous until the following May). Or I have confabulated his presence due to wishful thinking and some coincidences.

Here's the text of the email announcing the first event:
There will be a CryptoParty December 11th at HI Capacity at 6pm. Runa S. from the Tor team will be speaking along with a few others (more speakers are welcome!).

Here are the details:

If you plan to attend (or want to speak) RSVP on this thread or with cincinnatus (.a.) lavabit.com
The cryptoparty link doesn't work any more.

Here's the second event's announcement.

What is CryptoParty? Interested parties with computers, devices, and the desire to learn to use the most basic crypto programs and the fundamental concepts of their operation! CryptoParties are free to attend, public, and are commercially and politically non-aligned. CryptoParties are absolutely against sexual harassment and discrimination.

Good evening all, we have a second event to announce! It's a Crypto Party!

For all of you cypherpunks out there, let's learn about crypto!!!

Please join us at HI Capacity on Saturday, February 23rd at 12PM!
The two invitations came from different persons.