Is AlphaGo Really Such a Big Deal?

Will the technical advances that led to AlphaGo’s success have broader implications? To answer this question, we must first understand the ways in which the advances that led to AlphaGo are qualitatively different and more important than those that led to Deep Blue.

.. In chess, beginning players are taught a notion of a chess piece’s value. In one system, a knight or bishop is worth three pawns.

.. The notion of value is crucial in computer chess.
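The idea of piece value can be made concrete with a few lines of code. Below is a toy material-count evaluation using the conventional values (pawn = 1, knight/bishop = 3, rook = 5, queen = 9). The position encoding and function are illustrative assumptions, not Deep Blue's actual evaluation.

```python
# Toy material-count evaluation, the simplest form of the "value" idea
# described above. Uppercase letters are White's pieces, lowercase are
# Black's; kings are ignored. An illustrative sketch only.

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9}

def material_score(pieces: str) -> int:
    """Score a position given as a string of piece letters.

    Positive favors White, negative favors Black.
    """
    score = 0
    for ch in pieces:
        value = PIECE_VALUES.get(ch.lower(), 0)
        score += value if ch.isupper() else -value
    return score

# White has an extra knight compared to Black: score is +3.
print(material_score("RNBQKBNR" + "PPPPPPPP" + "pppppppp" + "rbqkbnr"))  # → 3
```

A real chess engine layers much more on top of this (piece mobility, king safety, pawn structure), but material counting of this kind sits at the core of the valuation idea.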

.. Top Go players use a lot of intuition in judging how good a particular board position is. They will, for instance, make vague-sounding statements about a board position having “good shape.” And it’s not immediately clear how to express this intuition in simple, well-defined systems like the valuation of chess pieces.

.. What’s new and important about AlphaGo is that its developers have figured out a way of bottling something very like that intuitive sense.

.. AlphaGo took 150,000 games played by good human players and used an artificial neural network to find patterns in those games. In particular, it learned to predict with high probability what move a human player would take in any given position. AlphaGo’s designers then improved the neural network by repeatedly playing it against earlier versions of itself, adjusting the network so it gradually improved its chance of winning.

.. AlphaGo created a policy network through billions of tiny adjustments, each intended to make just a tiny incremental improvement. That, in turn, helped AlphaGo build a valuation system that captures something very similar to a good Go player’s intuition about the value of different board positions.
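The training loop described above — play, then make a tiny adjustment so that moves which led to wins become slightly more likely — can be sketched with a minimal REINFORCE-style policy over a toy two-move "game". Everything here (the game, its win probabilities, the learning rate) is invented for illustration; AlphaGo's real policy network is a deep neural network over Go board positions.

```python
import math
import random

# Minimal policy-gradient sketch of "billions of tiny adjustments":
# sample a move, observe a win or loss, nudge the policy toward moves
# that won. The two-move game and its win probabilities are invented
# stand-ins for Go positions.

random.seed(0)
WIN_PROB = {"a": 0.8, "b": 0.3}   # hypothetical payoff of each move
logits = {"a": 0.0, "b": 0.0}     # the "policy network" (one state, two moves)
LEARNING_RATE = 0.05

def policy():
    """Softmax over logits: probability of choosing each move."""
    exps = {m: math.exp(v) for m, v in logits.items()}
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}

def play_and_adjust():
    probs = policy()
    move = random.choices(list(probs), weights=list(probs.values()))[0]
    reward = 1.0 if random.random() < WIN_PROB[move] else 0.0
    # REINFORCE update: gradient of log pi(move) w.r.t. each logit.
    for m in logits:
        grad = (1.0 - probs[m]) if m == move else -probs[m]
        logits[m] += LEARNING_RATE * reward * grad

for _ in range(5000):
    play_and_adjust()

print(policy())  # the policy now strongly prefers the better move "a"
```

Each individual update barely changes the policy, but thousands of them accumulate into a strong preference for the winning move — the same shape of process, at a vastly smaller scale, as AlphaGo's self-play improvement.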

.. I see AlphaGo not as a revolutionary breakthrough in itself, but rather as the leading edge of an extremely important development: the ability to build systems that can capture intuition and learn to recognize patterns. Computer scientists have attempted to do this for decades, without making much progress. But now, the success of neural networks has the potential to greatly expand the range of problems we can use computers to attack.

Most Popular Theories of Consciousness Are Worse Than Wrong

If you bundle enough information into a computer, creating a big enough connected mass of data, it’ll wake up and start to act conscious, like Skynet.

.. And yet it doesn’t actually explain anything. What exactly is the mechanism that leads from integrated information in the brain to a person who ups and claims, “Hey, I have a conscious experience of all that integrated information!”? There isn’t one.

.. This type of thinking leads straight to a mystical theory called panpsychism, the claim that everything in the universe is conscious, each in its own way, since everything contains at least some information. Rocks, trees, rivers, stars.

.. Consciousness has a specific, practical impact on brain function. If you want to understand how the brain works, you need to understand that part of the machine.

.. The explanation is sound enough that in principle, one could build the machine. Give it fifty years, and I think we’ll get there. Computer scientists already know how to construct a computing device that takes in information, that constructs models or simulations, and that draws on those simulations to arrive at conclusions and guide behavior.

Machine Learning: Why is overfitting bad?

This is basically how I explained it to my 6 year old.

Once there was a girl named Mel (“Get it? ML?” “Dad, you’re lame.”). And every day Mel played with a different friend, and every day she played it was a sunny, wonderful day.

Mel played with Jordan on Monday, Lily on Tuesday, Mimi on Wednesday, Olive on Thursday … and then on Friday Mel played with Brianna, and it rained. It was a terrible thunderstorm!

More days, more friends! Mel played with Kwan on Saturday, Grayson on Sunday, Asa on Monday … and then on Tuesday Mel played with Brooke and it rained again, even worse than before!

Now Mel’s mom made all the playdates, so that night during dinner she started telling Mel all about the new playdates she had lined up. “Luis on Wednesday, Ryan on Thursday, Jemini on Friday, Bianca on Saturday -”

Mel frowned.

Mel’s mom asked, “What’s the matter, Mel, don’t you like Bianca?”

Mel replied, “Oh, sure, she’s great, but every time I play with a friend whose name starts with B, it rains!”


What’s wrong with Mel’s answer?

Well, it might not rain on Saturday.

Well, I don’t know, I mean, Brianna came and it rained, Brooke came and it rained …

Yeah, I know, but rain doesn’t depend on your friends.
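Mel's mistake is easy to reproduce in code: a rule fitted to coincidences in the training days scores perfectly on those days and then misfires on new ones. The playdate data below comes from the story; the sunny new week is an assumption for illustration (rain doesn't depend on friends, after all).

```python
# Mel's overfit rule: "friends whose names start with B bring rain."
# It fits the past playdates perfectly, because it memorized a
# coincidence — and fails as soon as new data arrives.

train = [  # (friend, rained?) -- the days Mel has already seen
    ("Jordan", False), ("Lily", False), ("Mimi", False), ("Olive", False),
    ("Brianna", True), ("Kwan", False), ("Grayson", False), ("Asa", False),
    ("Brooke", True),
]

def mels_rule(friend):
    """Overfit model: predicts rain from the first letter of a name."""
    return friend.startswith("B")

def accuracy(rule, days):
    return sum(rule(f) == rained for f, rained in days) / len(days)

print(accuracy(mels_rule, train))  # → 1.0, perfect on past playdates

# The new week: suppose it stays sunny. The rule misfires on Bianca's day.
new = [("Luis", False), ("Ryan", False), ("Jemini", False), ("Bianca", False)]
print(accuracy(mels_rule, new))    # → 0.75, the "B" rule breaks down
```

Perfect accuracy on the training data combined with degraded accuracy on fresh data is the signature of overfitting: the model has learned the noise, not the process that generates the weather.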

The Man Who Tried to Redeem the World with Logic

As Pitts began his work at MIT, he realized that although genetics must encode for gross neural features, there was no way our genes could pre-determine the trillions of synaptic connections in the brain—the amount of information that would require was implausibly large. It must be the case, he figured, that we all start out with essentially random neural networks—highly probable states containing negligible information (a thesis that continues to be debated to the present day). He suspected that by altering the thresholds of neurons over time, randomness could give way to order and information could emerge.
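Pitts's conjecture can be rendered as a toy simulation: fix a random wiring (the part genes cannot specify in detail), then let experience adjust only the neurons' firing thresholds. Repeated exposure to one stimulus lowers the thresholds of neurons it drives strongly, and the initially random network becomes selectively responsive to that stimulus. The sizes and the update rule below are invented for illustration; Pitts proposed the idea mathematically, not as code.

```python
import random

# Toy rendering of Pitts's conjecture: random fixed wiring, adaptive
# thresholds. Only the thresholds change with experience.

random.seed(1)
N_INPUTS, N_NEURONS = 8, 20
weights = [[random.uniform(-1, 1) for _ in range(N_INPUTS)]
           for _ in range(N_NEURONS)]            # fixed random wiring
thresholds = [random.uniform(0.0, 2.0) for _ in range(N_NEURONS)]

stimulus = [1, 0, 1, 1, 0, 0, 1, 0]              # the repeated experience

def drive(neuron, pattern):
    """Weighted input a pattern delivers to one neuron."""
    return sum(w * x for w, x in zip(weights[neuron], pattern))

def responders(pattern):
    """How many neurons fire (drive meets threshold) for a pattern."""
    return sum(drive(n, pattern) >= thresholds[n] for n in range(N_NEURONS))

before = responders(stimulus)
for _ in range(200):                 # experience adjusts thresholds only
    for n in range(N_NEURONS):
        thresholds[n] -= 0.01 * drive(n, stimulus)
after = responders(stimulus)

print(before, "->", after)  # more neurons now respond to the familiar pattern
```

Under this rule the number of responders can only grow: neurons the stimulus drives positively have their thresholds lowered and recruit into the response, while neurons it drives negatively never fired and never will. Order — a selective response — emerges from a random starting state through threshold changes alone, which is the shape of the mechanism Pitts suspected.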