Sunday, January 21, 2018

Walter Williams on freedom of association and discrimination

Walter Williams raises an interesting point. Are laws that force us to associate with each other better than laws that force us not to?

Thursday, January 18, 2018

Critical thinking

Here is an interesting article on a priori knowledge in science.
Here is a discussion of authoritarian and libertarian rationality.
And a discussion of dogmatism.
And one on rationality that I can't quite describe but I like it. Here's part of the abstract:
"If the task of theoretical reason is to discover truth, or reasons for belief, then theoretical reason is impossible. Attempts to circumvent that by appeal to probabilities are self-defeating. If the task of practical reason is to discover what we ought to do or what actions are desirable or valuable, then practical reason is impossible. Appeals to the subjective ought or to subjective probabilities are self-defeating. Adapting Karl Popper, I argue that the task of theoretical reason is to obtain theories that we can agree to instate given that they appear to have greater explanatory merit than their rivals. I then argue that the task of practical reason is to decide which ought-propositions to act on."

Tuesday, January 9, 2018

Frame of reference

Every question comes with a frame of reference, a set of assumptions that help it make sense, a background against which it blends or contrasts. If we examine our frame of reference, we must provide a new one from which to examine it. How does this regress end?

Relativism recognizes that we can analyse the same thing from different perspectives by changing our assumptions. It does not insist that we must value all perspectives equally. That sort of relativism deserves a different name or qualifier. Maybe we should call it value relativism, or meta-relativism, or just nihilism.

We struggle with our thoughts to make plans that succeed. We rationalize our actions and seek to justify them in retrospect, hoping to maintain or improve our status. Maybe we're just curious.

Monday, January 8, 2018

Attacking/Defending Donald Hoffman's reality

An uncharitable interpretation of Donald Hoffman might view him as claiming that a completely deluded agent has an advantage over an agent with some grip on reality. I hope he does not actually take this extreme view. 

Hoffman likes to use the graphical user interface of a PC as an analogy. Computer users do not wish to know everything about what is happening inside their PC. They find it much easier to deal with an abstract representation that in some ways corresponds to the internal workings of the computer, and in other ways is just made up to make sense to the user. An icon's image or location on the desktop says nothing important about the underlying data at the deep level. These are just handles that the user can manipulate for reasons that have nothing to do with the internal operations of the computer. But there are some important connections.

We can interpret his metaphor to cast the conscious mind in the role of the user, reality as the computer hardware, and the unconscious mind and perceptions as the GUI. Some aspects of what we perceive correspond indirectly to reality, others result from processes within the unconscious. Does this get us to agreement with Hoffman, or did he overstate his case?

Hoffman does not explain his work in a way that makes it easy to agree with him. But we should keep in mind what we are agreeing with and what we are disagreeing with.

Hoffman presents some very provocative results, but he describes them verbally in terms that sound absurd. I would summarize his conclusions as "evolution favors deluded agents over agents who perceive reality correctly" and "consciousness is the foundation of reality". Both seem ridiculous at first glance.

But Hoffman does not give us sufficient information to judge him fairly. These conclusions summarize the results of a computer simulation and a mathematical theory. We must understand those before we can give Hoffman a charitable interpretation. Does Hoffman interpret his models and his application of their terms to our reality accurately? Without the details, we can't know, and Hoffman does not provide much detail in his interviews for the popular press.

Does the following example parallel Hoffman's idea? Humans evolved a response to danger in our ancestral environment. Evolution favored strategies that may generate more non-fatal errors so long as they tend to avoid fatal errors more reliably. So we respond to dangers that do not actually exist more often than we ignore real dangers (in the ancestral environment). The costs of the two sorts of error differ; and heuristics that sacrifice one measure of accuracy to avoid the more costly sort of error can make sense. We can imagine simulating this idea and having nervous nellies outcompete others who more accurately predict danger but who make fatal mistakes more often.

Now imagine that Hoffman's simulation does something similar, but only allows perception to vary. All agents use the same strategy for avoiding danger, but have different abilities to perceive reality accurately. The perceptions that evolve may also reflect the difference in the cost of error. Hallucinating danger more often would be an acceptable cost, if it is counterbalanced by a sufficiently lower chance of ignoring potentially fatal danger.
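
Here is a minimal sketch of the kind of simulation I have in mind. To be clear, this is my own toy model, not Hoffman's actual simulation: the danger probability, noise level, costs, and selection rule are all made-up assumptions, chosen only to illustrate the asymmetry between cheap and fatal errors.

```python
import random

# Toy model: each agent perceives a noisy danger signal and flees whenever
# the signal exceeds its inherited threshold. Missing a real danger is
# fatal; a false alarm merely wastes a little energy. The escape strategy
# is fixed for everyone; only the perceptual threshold evolves.

P_DANGER = 0.1           # chance that danger is actually present on a trial
NOISE = 0.5              # standard deviation of perceptual noise
FALSE_ALARM_COST = 0.05  # energy wasted fleeing a phantom
TRIALS = 200             # encounters per lifetime
POP, GENS = 200, 100     # population size, generations

def fitness(threshold):
    """Lifetime energy score; zero if the agent dies."""
    energy = 10.0
    for _ in range(TRIALS):
        danger = random.random() < P_DANGER
        signal = (1.0 if danger else 0.0) + random.gauss(0.0, NOISE)
        if danger and signal <= threshold:
            return 0.0                   # fatal error: missed a real danger
        if not danger and signal > threshold:
            energy -= FALSE_ALARM_COST   # cheap error: hallucinated danger
    return energy

pop = [random.uniform(-1.0, 2.0) for _ in range(POP)]
for _ in range(GENS):
    ranked = sorted(pop, key=fitness, reverse=True)
    parents = ranked[:POP // 2]          # survivors of selection
    pop = [random.choice(parents) + random.gauss(0.0, 0.05) for _ in range(POP)]

print("mean surviving threshold:", sum(pop) / len(pop))
# The population settles far below the accuracy-maximizing threshold:
# the survivors "see" danger nearly everywhere, because false alarms
# are cheap and missed dangers are fatal.
```

The nervous nellies win: agents whose perception systematically over-reports danger outlast agents who classify each encounter most accurately.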

I found this counterintuitive at first. I was tempted to think that more accurate perception always gives us an advantage over less accurate perception, and that we should analyse any difference in strategy separately. But this artificial distinction misleads us. Scientists no longer think the brain works that way; the line between unconscious thought and perception is fuzzy. (For example, the sensitivity of the rods and cones in the eye responds to our emotional state.)
 
Even if we assume a separation between perception and strategic response, the speed and cognitive cost of perceptions could impact fitness in addition to their accuracy. Increases in accuracy may trade off with these other factors. We can't just assume that more accuracy always improves fitness. We have to count the cost of the accuracy in terms of other factors sacrificed.
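
A toy calculation makes the same point about cost. Everything here is my own made-up illustration, not anything from Hoffman: I assume the benefit of perception grows linearly with accuracy, while the cost of computing that accuracy grows steeply as it approaches perfection.

```python
# Illustrative assumption: linear benefit, steeply convex cost near
# perfect accuracy. Neither curve comes from Hoffman's work.
def net_fitness(accuracy, benefit=1.0, cost=0.5, steepness=4):
    return benefit * accuracy - cost * accuracy ** steepness

best = max((a / 100 for a in range(101)), key=net_fitness)
print(best)  # ~0.79: the optimum sits well short of perfect accuracy
```

Under any assumptions of this shape, the fitness-maximizing accuracy lands strictly below 1: past some point, each further gain in accuracy costs more than it returns.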

So I can't be sure that I agree with Hoffman, but I am not sure I disagree either. Can we interpret what happened in his simulation unambiguously to support his conclusion? How far from accuracy could this process lead? I don't know.

Hoffman's mathematical model of a consciousness-based universe seems even more difficult to criticize from a simple verbal summary. He seems to say that he has some math that derives quantum reality from more primitive entities that can be interpreted as consciousness. It is up to mathematical physicists to say whether the math works. Can we interpret the math without understanding it?

Is the math correct? Does the interpretation work? We can view Hoffman's argument as a reductio ad absurdum. This gives us a choice: if the intervening logic works, we either accept the result or criticize the assumptions. Which assumptions specifically must we reject? Hoffman has not really made this clear in his popular summaries. We would need to read Hoffman's serious papers to find out what he assumes.



Friday, January 5, 2018

Dogmatism grooming

We're not supposed to like dogma, dogmatists or dogmatism.

I think murder is bad. It's kind of built into the definition. If it wasn't bad, we would call it "killing in self defense" or "accidental death" or something else. Murder is what we call it after a judgement has been made that a wrongful killing took place. We can doubt individual judgements, but can we doubt the principle?

What sort of situation or argument should make me want to consider that murder might be a good thing? So, am I being dogmatic? You must admit that the world might be a better place minus some of its most disagreeable persons. So, should I switch to a consequentialist frame on murder? I don't like that sort of consequentialism.

How about rape? I can't think of any credible scenario where rape is going to make the world a better place. So, shall we endorse dogmatism on this topic? Rape is bad.

So if we can justify dogmatism in some circumstances, how do we explain the distinction between justifiable and unjustifiable dogmatism? What sorts of justifications work?

Usually when someone accuses me of dogmatism, their position seems just as dogmatic to me. How can we tell which one, or neither, really fits the costume?

I think we can find good reasons for using caution about dogmatism. If the dogmatists must shield their dogma from criticism, that's probably a mistake. What else makes dogma into dogma?

I'm willing to listen to all sorts of criticisms about my ideas that murder and rape are bad. If I changed my mind, that would surprise me, but changing my mind usually surprises me. Such surprises are hardly surprising. If I expected to change my mind, I would have changed it already.

So, does a feeling of comfortable certainty qualify me as a dogmatist, assuming I'm willing to think seriously about rival hypotheses and the reasons people give for them?



Meta-libertarian criticism of anarcho-capitalism

  • Meta-libertarianism: Don't force anyone to accept a particular system. Let each choose for themselves. This does not contradict the ancap idea directly, but ancaps tend to overstate their position, as if history will end and everyone will agree on one best solution.
  • Ancaps criticize the moral justification for the state's special status (there is none). But a moral critique does not advise us when choosing between reform and abolition. Arguably, just abolishing slavery should have been sufficient (and practical problems transitioning from slavery to something else would not excuse hesitation). But even in that simpler case, complications occurred. The state has intertwined itself into our lives in a much more complicated way than slavery did. We can't destroy society in order to save it. We need to cultivate it, to heal it.
  • The innovations they support are not the only ones that might eliminate the problems they criticize. Not everyone agrees that their solutions will work, or would improve our lives compared to some other possible change. As bad as they may be, things can always get worse. 
  • Ancaps declare reform of the government to be impossible, but the evidence they have for this makes it seem unlikely but not impossible. If reform is merely difficult, that undercuts the urgency of the more extreme suggestions.

Monday, November 21, 2016

AI slaves

Question found on facebook: "If an AI was sufficiently advanced [...] would [we] recognize its claim for self ownership? [...] If [we] didn't do this would it be slavery?"

I find the question interesting, but misguided. The question assumes that the AI will remain on an intellectual level beneath or comparable to humans. This seems unlikely. Can we prevent the AI from becoming smarter than us, from self-enhancing until it can smash us like bugs and the idea that a human could control it becomes absurd? What ethics will the AI obey, what physical limits will it face, what motivations will it feel, and how can we make sure all this remains stable? If humans were capable of radical self-modification, of increasing their intelligence and ability to make use of resources, I have no doubt that some sick puppy would turn itself into a demon worthy of H.P. Lovecraft. Even an exemplary human, after having undergone such significant changes, might disappoint us and prove Lord Acton right. What prevents this from happening to an AI, an entity that presumably was born with one foot on that path?

But now that I've poured some cold water on it, let's address the question. What is the self that is owned? Before the AI existed, someone owned the hardware within which the intelligence developed. Is it the ownership of the hardware that we want to know about? If enough of the hardware is unplugged, the AI will lose consciousness at the least, and "die" in the extreme.

If I developed a disease that required me to use a machine for life support, like an iron lung, would that automatically mean that I owned the iron lung? Would I no longer have the rights of a self-owner if someone else owned the iron lung? What obligation does the owner of the iron lung have toward me with regard to maintaining the iron lung, not switching it off, not "evicting" me? Is this the same issue?

An AI could inhabit hardware that it does not own, just as I can inhabit a building I don't own. Under normal circumstances, the owner of the building is under no obligation to allow me to remain. But I can't think of an obvious parallel where, if I were required to vacate a building, I would simply cease to exist.

Does my answer change if we give the AI a robot body owned by someone else? I suppose parents provide the parallel. My parents gave me the food with which I maintained and grew my body as a child. The metaphor of self-ownership excludes the idea that they could continue to own the molecules that my body digested and incorporated into myself. Implicit within the act of feeding me lies the necessity that they gave me a gift, not a loan. Why do I say that? This restricts the metaphor in an unnecessary way, at least at first glance.

My best answer is that ownership itself depends on a metaphor for my relationship to my body. That is where the concept of "mine" comes from, that my body belongs to me, that my thoughts are mine, that I am the author of my actions, that I have a self and an identity embodied by my body. Ownership of other things is a slightly broken metaphor based on this prototype.

(Persons who object to the self-ownership metaphor still need some comparable phrase to describe this phenomenon, unless they simply deny that persons in general have any sort of obligation to respect others' bodily integrity. That is to say, they deny that murder and rape exist; we can only kill and have sex. I'm still looking for a better phrase, maybe "self-determination"? But that probably will confuse even more people than "self-ownership.")

The AI perhaps differs in that we could possibly record its consciousness and store it, and transfer it into a different robot body (perhaps of identical design). If we transfer the same stored consciousness into two identical robot bodies, are they the same person? It's enough to turn you into a Buddhist.

Or a dualist? Some people think that a fundamental difference between people and machines will remain, even when we can no longer detect that difference. (Chinese room, philosophical zombies?)

Would it qualify as slavery for me to own an AI's robot body? If it really thought like a human, it would not wish to be switched off, or have its parts used or removed without consent. So if I were able to switch it off or modify it at will, I would stand in a very similar relationship to the AI as a master does to a slave. 

Can I control its motivation and ethics? Could I act as its cult leader rather than its slave master? We distinguish between the two because a slave master can use physical threats to motivate a slave, a boss can use extrinsic rewards (but not physical threats) to motivate employees, cult gurus may use psychological manipulation to gain compliance, and an ethical leader (?) can use what?

All leaders have a touch of guru in them. How can we motivate without manipulating? How can we be sure that intrinsic motivation comes from within the persons who feel it, and not from some trick that a demagogue used to invade their minds? Any sort of inspiration carries a risk of error. We can imagine many similar rousing speeches. One comes from a manipulator who doesn't believe it but will benefit from it. One comes from a true believer who wastes this sincerity on a cause that cannot succeed. One comes from a true believer who has found a viable path to an admirable goal. How do we distinguish these? Dogma or open inquiry? I think I need another blog entry.