Most Popular Theories of Consciousness Are Worse Than Wrong

If you bundle enough information into a computer, creating a big enough connected mass of data, it’ll wake up and start to act conscious, like Skynet.

.. And yet it doesn’t actually explain anything. What exactly is the mechanism that leads from integrated information in the brain to a person who ups and claims, “Hey, I have a conscious experience of all that integrated information!” There isn’t one.

.. This type of thinking leads straight to a mystical theory called panpsychism, the claim that everything in the universe is conscious, each in its own way, since everything contains at least some information. Rocks, trees, rivers, stars.

.. Consciousness has a specific, practical impact on brain function. If you want to understand how the brain works, you need to understand that part of the machine.

.. The explanation is sound enough that in principle, one could build the machine. Give it fifty years, and I think we’ll get there. Computer scientists already know how to construct a computing device that takes in information, that constructs models or simulations, and that draws on those simulations to arrive at conclusions and guide behavior.
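As a rough illustration of that recipe (the class and names below are mine, not Graziano’s), here is a minimal sketch of such a device: it takes in observations, stores them in an internal model, and runs that model forward to choose actions.

```python
# Minimal sketch of a take-in-information / build-a-model / simulate-to-act
# loop. All names here are illustrative, not from the article.

class ModelBasedAgent:
    def __init__(self, actions):
        self.actions = actions
        self.model = {}  # internal model: (state, action) -> observed outcome

    def update(self, state, action, outcome):
        """Take in information: record what actually happened."""
        self.model[(state, action)] = outcome

    def simulate(self, state, action):
        """Draw on the model: predict an action's outcome (0 if unknown)."""
        return self.model.get((state, action), 0)

    def act(self, state):
        """Guide behavior: pick the action with the best simulated outcome."""
        return max(self.actions, key=lambda a: self.simulate(state, a))

agent = ModelBasedAgent(actions=["left", "right"])
agent.update("fork", "left", outcome=-1)   # left led to a bad outcome
agent.update("fork", "right", outcome=+1)  # right led to a good outcome
print(agent.act("fork"))                   # -> "right"
```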

AI Visionary Eliezer Yudkowsky on the Singularity, Bayesian Brains and Closet Goblins

His writings (such as this essay, which helped me grok, or gave me the illusion of grokking, Bayes’s Theorem) exude the arrogance of the autodidact, edges undulled by formal education, but that’s part of his charm.
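For reference, the theorem that essay walks through, in its standard form:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

read as: the probability of hypothesis H after seeing evidence E is the prior P(H) reweighted by how strongly H predicts E.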

.. Horgan: Are you religious in any way?

Yudkowsky: No.  When you make a mistake, you need to avoid the temptation to go defensive, try to find some way in which you were a little right, look for a silver lining in the cloud.  It’s much wiser to just say “Oops”, admit you were not even a little right, swallow the whole bitter pill in one gulp, and get on with your life.  That’s the attitude humanity should take toward religion.

..  I imagine trying to get the world to a condition where some unemployed person can offer to drive you to work for 20 minutes, be paid five dollars, and then nothing else bad happens to them.  They don’t have their unemployment insurance phased out, have to register for a business license, lose their Medicare, be audited, have their lawyer certify compliance with OSHA rules, or whatever.  They just have an added $5.

I’d try to get to the point where employing somebody was once again as easy as it was in 1900.  I think it can make sense nowadays to have some safety nets, but I’d try to construct every safety net such that it didn’t disincentivize or add paperwork to that simple event where a person becomes part of the economy again.

..  If you were previously irrational in multiple ways that balanced or canceled out, then becoming half-rational can leave you worse off than before.

.. Horgan: How does your vision of the Singularity differ from that of Ray Kurzweil?

Yudkowsky:

– I don’t think you can time AI with Moore’s Law.  AI is a software problem.

.. “An earthquake in California causing a flood that causes over a thousand deaths” than another group assigned to “A flood causing over a thousand deaths somewhere in North America.”  Even though adding on additional details necessarily makes a story less probable, it can make the story sound more plausible.
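The underlying probability fact is worth spelling out: adding a conjunct can never raise a probability, since

$$P(A \cap B) = P(A)\,P(B \mid A) \le P(A)$$

so “an earthquake causing a flood that kills a thousand people” is necessarily no more probable than “a flood that kills a thousand people,” however much the extra detail improves the story.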

..  “I want to live one more day.  Tomorrow I’ll still want to live one more day.  Therefore I want to live forever, proof by induction on the positive integers.”
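Spelled out, with W(n) standing for “I want to be alive on day n,” the joke is a literal application of the induction schema:

$$\big(W(1) \;\wedge\; \forall n\,(W(n) \Rightarrow W(n+1))\big) \;\Longrightarrow\; \forall n\, W(n)$$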

.. if you had solved the general problem of pinpointing an AI’s utility functions at things that seem deceptively straightforward to human intuitions, and you’d solved an even harder problem of building an AI using the particular sort of architecture where ‘being horny’ or ‘sex makes me happy’ makes sense in the first place, then you could perhaps make an AI that had been told to look at humans, model what humans want, pick out the part of the model that was sexual desire, and then want and experience that thing too.

You could also, if you had a sufficiently good understanding of organic biology and aerodynamics, build an airplane that could mate with birds.

.. The fatal scenario is an AI that neither loves you nor hates you, because you’re still made of atoms that it can use for something else.  Game theory, and issues like cooperation in the Prisoner’s Dilemma, don’t emerge in all possible cases.  In particular, they don’t emerge when something is sufficiently more powerful than you that it can disassemble you for spare atoms whether you try to press Cooperate or Defect.
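A toy way to see that last claim (payoff numbers invented for illustration): in the standard Prisoner’s Dilemma each player’s payoff depends on both moves, which is exactly what makes cooperation worth reasoning about; under a sufficient power asymmetry, the stronger agent’s payoff no longer depends on the weaker agent’s move at all.

```python
# Hypothetical payoff tables: (strong_payoff, weak_payoff) per move pair.

# Classic Prisoner's Dilemma: each payoff depends on BOTH moves,
# so cooperate/defect reasoning has strategic bite.
pd = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

# Extreme power asymmetry: the strong agent gets the same payoff
# no matter what the weak agent plays -- the dilemma never arises.
asym = {
    ("C", "C"): (10, 0), ("C", "D"): (10, 0),
    ("D", "C"): (10, 0), ("D", "D"): (10, 0),
}

print({pd[(s, w)][0] for s in "CD" for w in "CD"})    # {0, 1, 3, 5}
print({asym[(s, w)][0] for s in "CD" for w in "CD"})  # {10}: weak's move changes nothing
```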


Machine Learning: Why is overfitting bad?

This is basically how I explained it to my 6-year-old.

Once there was a girl named Mel (“Get it? ML?” “Dad, you’re lame.”). And every day Mel played with a different friend, and every day she played it was a sunny, wonderful day.

Mel played with Jordan on Monday, Lily on Tuesday, Mimi on Wednesday, Olive on Thursday … and then on Friday Mel played with Brianna, and it rained. It was a terrible thunderstorm!

More days, more friends! Mel played with Kwan on Saturday, Grayson on Sunday, Asa on Monday … and then on Tuesday Mel played with Brooke and it rained again, even worse than before!

Now, Mel’s mom made all the playdates, so that night during dinner she started telling Mel all about the new playdates she had lined up. “Luis on Wednesday, Ryan on Thursday, Jemini on Friday, Bianca on Saturday -”

Mel frowned.

Mel’s mom asked, “What’s the matter, Mel, don’t you like Bianca?”

Mel replied, “Oh, sure, she’s great, but every time I play with a friend whose name starts with B, it rains!”


What’s wrong with Mel’s answer?

“Well, it might not rain on Saturday.”

“Well, I don’t know, I mean, Brianna came and it rained, Brooke came and it rained …”

“Yeah, I know, but rain doesn’t depend on your friends.”
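To see Mel’s mistake in code (a minimal sketch using scikit-learn; the name-based feature is my own contrivance): a model flexible enough to memorize nine playdates will “learn” the B-names rule, score perfectly on the days it has seen, and confidently predict rain for Bianca.

```python
# Overfitting demo: fit a flexible model to a spurious feature
# (the friend's initial) and watch it memorize the training data.
from sklearn.tree import DecisionTreeClassifier

friends = ["Jordan", "Lily", "Mimi", "Olive", "Brianna",
           "Kwan", "Grayson", "Asa", "Brooke"]
rained  = [0, 0, 0, 0, 1, 0, 0, 0, 1]  # it rained only on the B-name days

# Encode each playdate by an irrelevant feature: the name's first letter.
X = [[ord(name[0])] for name in friends]

model = DecisionTreeClassifier().fit(X, rained)
print(model.score(X, rained))        # 1.0: a perfect fit to the days seen
print(model.predict([[ord("B")]]))   # [1]: predicts rain for Bianca

# The tree reproduces Mel's rule exactly -- but rain doesn't depend on
# friends, so the perfect training score is memorized noise, and the
# model will generalize badly to new playdates.
```

The fix is the same for the model as for Mel: judge it on playdates it has never seen (held-out data), not on the days it memorized.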