Eric Veach: Google

Eric Veach is a Canadian computer scientist who won a technical Academy Award.[1][2]

He won his 2014 Academy Award for his work on Monte Carlo simulation of light transport, the rendering techniques described in his 1997 PhD thesis.[1][3] He told CTV News he hadn’t done any work in computer graphics for 15 years. Veach had worked at Pixar, but more recently he had been a senior developer at Google.[2]

His PhD thesis, Robust Monte Carlo Methods for Light Transport Simulation, is highly cited.[3]

In 2008, the University of Waterloo, where he earned his Bachelor of Mathematics in 1990, awarded him the J. W. Graham Medal, an annual award given to a distinguished alumnus who studied computer science there.[2] His PhD is from Stanford University.

Veach is a strong believer in environmental causes and served as the vice-chair of the Rainforest Trust.[4]

Farhad Manjoo named Veach and two of his non-American colleagues at Google in an article entitled “Why Silicon Valley Wouldn’t Work Without Immigrants”.[5] Manjoo’s article argued that newly inaugurated President Donald Trump’s attempts to choke off the flow of immigrants to the USA were dangerous: America, he argued, benefits disproportionately when brilliant foreign-born engineers like Veach can find work there.

The Age of Entanglement

Why humans should think about technology the way field biologists examine the living world

Hundreds of thousands of air travelers were delayed by a major, system-wide network outage at Delta on Monday morning, a problem that’s becoming increasingly common in a world run by interconnected and aging computer systems.

.. Such large-scale technological failures aren’t just massively inconvenient; they’re potentially dangerous, especially as machines increasingly handle crucial operations across a variety of industries. Complex systems are redefining the ways in which humans think about and interact with technology, a dramatic shift in perspective that poses its own risks. That’s the argument at the heart of Samuel Arbesman’s new book, Overcomplicated.

.. Arbesman is not saying you need to dismantle your iPhone and build it from scratch, or only use apps that you created yourself. (Although, hey, if that’s your thing, great.) But he is saying that active curiosity—and a certain degree of futzing with the technological systems we encounter—is culturally overdue.

Is AlphaGo Really Such a Big Deal?

Will the technical advances that led to AlphaGo’s success have broader implications? To answer this question, we must first understand the ways in which the advances that led to AlphaGo are qualitatively different and more important than those that led to Deep Blue.

.. In chess, beginning players are taught a notion of a chess piece’s value. In one system, a knight or bishop is worth three pawns.

.. The notion of value is crucial in computer chess.
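To make that concrete, here is a minimal sketch (my own, not from the article) of the simplest such value system: a material count using the beginner piece values mentioned above. Real engines layer many positional terms on top of this.

```python
# Classic material-count evaluation: each piece type has a fixed value in
# "pawns", and a position's score is White's material minus Black's.

PIECE_VALUES = {  # the "knight or bishop = three pawns" system
    "P": 1,   # pawn
    "N": 3,   # knight
    "B": 3,   # bishop
    "R": 5,   # rook
    "Q": 9,   # queen
}

def material_score(white_pieces, black_pieces):
    """Return the material balance from White's point of view.

    Each argument is a list of piece letters, e.g. ["Q", "R", "N", "P"].
    """
    white = sum(PIECE_VALUES.get(p, 0) for p in white_pieces)
    black = sum(PIECE_VALUES.get(p, 0) for p in black_pieces)
    return white - black

# Example: White is up a knight for two pawns -> score of +1.
print(material_score(["N", "P"], ["P", "P", "P"]))
```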

.. Top Go players use a lot of intuition in judging how good a particular board position is. They will, for instance, make vague-sounding statements about a board position having “good shape.” And it’s not immediately clear how to express this intuition in simple, well-defined systems like the valuation of chess pieces.

.. What’s new and important about AlphaGo is that its developers have figured out a way of bottling something very like that intuitive sense.

.. AlphaGo took 150,000 games played by good human players and used an artificial neural network to find patterns in those games. In particular, it learned to predict with high probability what move a human player would take in any given position. AlphaGo’s designers then improved the neural network by repeatedly playing it against earlier versions of itself, adjusting the network so it gradually improved its chance of winning.
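Here is a toy sketch of that first, supervised stage (purely illustrative, not DeepMind's code): a tiny softmax "policy" trained by gradient descent to predict which move an expert would pick, with random vectors standing in for board positions. AlphaGo's real policy network was a deep convolutional network over the 19x19 board.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_MOVES, N_EXAMPLES = 8, 5, 1000

# Fake "expert" data: positions plus the move an expert chose in each.
true_W = rng.normal(size=(N_FEATURES, N_MOVES))       # hidden expert preference
positions = rng.normal(size=(N_EXAMPLES, N_FEATURES))
expert_moves = np.argmax(positions @ true_W, axis=1)  # the move the expert picks

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# One linear layer standing in for the deep network.
W = np.zeros((N_FEATURES, N_MOVES))
lr = 0.1
for step in range(500):
    probs = softmax(positions @ W)                      # predicted move distribution
    onehot = np.eye(N_MOVES)[expert_moves]
    grad = positions.T @ (probs - onehot) / N_EXAMPLES  # cross-entropy gradient
    W -= lr * grad                                      # tiny adjustment per step

accuracy = (np.argmax(positions @ W, axis=1) == expert_moves).mean()
print(f"agreement with expert moves: {accuracy:.2f}")
```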

.. AlphaGo created a policy network through billions of tiny adjustments, each intended to make just a tiny incremental improvement. That, in turn, helped AlphaGo build a valuation system that captures something very similar to a good Go player’s intuition about the value of different board positions.
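And a toy sketch of that second stage (again illustrative; the win probabilities, update rule, and learning rates are made up for the demo): the policy is nudged, one tiny adjustment per game, toward moves that win more often than expected, while a running value estimate tracks how good the position looks.

```python
import numpy as np

rng = np.random.default_rng(1)
N_MOVES = 4
# Hidden probability of eventually winning after each move; the learner only
# ever observes sampled win/loss outcomes, never these numbers directly.
WIN_PROB = np.array([0.2, 0.4, 0.5, 0.7])

logits = np.zeros(N_MOVES)   # the policy's parameters
value_estimate = 0.5         # running estimate of the position's value
lr_policy, lr_value = 0.05, 0.01

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for game in range(20000):
    probs = softmax(logits)
    move = rng.choice(N_MOVES, p=probs)
    won = rng.random() < WIN_PROB[move]   # play the game out; observe win/loss
    reward = 1.0 if won else -1.0

    # Tiny adjustment: push probability toward moves that beat the baseline.
    grad_log = -probs
    grad_log[move] += 1.0
    logits += lr_policy * (reward - (2 * value_estimate - 1)) * grad_log

    # Value stand-in: running estimate of the win rate from this position.
    value_estimate += lr_value * (won - value_estimate)

print("learned move probabilities:", np.round(softmax(logits), 2))
print("estimated win probability:", round(value_estimate, 2))
```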

.. I see AlphaGo not as a revolutionary breakthrough in itself, but rather as the leading edge of an extremely important development: the ability to build systems that can capture intuition and learn to recognize patterns. Computer scientists have attempted to do this for decades, without making much progress. But now, the success of neural networks has the potential to greatly expand the range of problems we can use computers to attack.

Most Popular Theories of Consciousness Are Worse Than Wrong

If you bundle enough information into a computer, creating a big enough connected mass of data, it’ll wake up and start to act conscious, like Skynet.

.. And yet it doesn’t actually explain anything. What exactly is the mechanism that leads from integrated information in the brain to a person who ups and claims, “Hey, I have a conscious experience of all that integrated information!” There isn’t one.

.. This type of thinking leads straight to a mystical theory called panpsychism, the claim that everything in the universe is conscious, each in its own way, since everything contains at least some information. Rocks, trees, rivers, stars.

.. Consciousness has a specific, practical impact on brain function. If you want to understand how the brain works, you need to understand that part of the machine.

.. The explanation is sound enough that in principle, one could build the machine. Give it fifty years, and I think we’ll get there. Computer scientists already know how to construct a computing device that takes in information, that constructs models or simulations, and that draws on those simulations to arrive at conclusions and guide behavior.
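For what that might mean in the most stripped-down sense, here is a toy sketch (mine, not the article's): a little device that takes in noisy readings, keeps an internal model, simulates candidate actions against that model, and acts on the result.

```python
import random

SETPOINT = 21.0        # target room temperature in degrees C (made up)
HEATER_EFFECT = 0.5    # assumed warming per step when the heater is on
DRIFT = -0.2           # assumed cooling per step from the environment

class ModelBasedThermostat:
    def __init__(self):
        self.model_temp = 18.0  # internal model of the room

    def observe(self, reading):
        # Blend the new (noisy) reading into the internal model.
        self.model_temp = 0.8 * self.model_temp + 0.2 * reading

    def simulate(self, heater_on, steps=5):
        # Run the internal model forward to predict where this action leads.
        temp = self.model_temp
        for _ in range(steps):
            temp += DRIFT + (HEATER_EFFECT if heater_on else 0.0)
        return temp

    def act(self):
        # Pick the action whose simulated outcome lands closest to the setpoint.
        errors = {on: abs(self.simulate(on) - SETPOINT) for on in (True, False)}
        return min(errors, key=errors.get)

# Drive it with a fake noisy sensor.
agent = ModelBasedThermostat()
true_temp = 18.0
for step in range(30):
    agent.observe(true_temp + random.uniform(-0.3, 0.3))
    heater_on = agent.act()
    true_temp += DRIFT + (HEATER_EFFECT if heater_on else 0.0)
print(f"temperature after 30 steps: {true_temp:.1f} C")
```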