
Neural Networks and New Regularities

In my first post, I argued reality is full of regularities. Exploiting these regularities requires information about them that can be conveyed and communicated. I call this information representations, of which there are five categories: rules, probabilistic statements, metaphors, neural networks, and instantiations. Today I want to talk about neural networks.

Neural Networks: A Very Basic Primer

If you are already familiar with neural networks, feel free to skip this section. This is a really simplified explanation, and I’m omitting a lot of detail. If you want to learn more, the chapter on neural networks in The Master Algorithm by Pedro Domingos is a recent overview. The Medium series “Machine Learning is Fun!” is an excellent primer if you want to get your hands a bit dirty.

Neural networks were originally inspired by the structure of our brains. Somehow, these arrangements of biological matter are capable of thinking, computing, and learning. What is it about our brains that makes that possible?

Our brains are a network of neurons. Neurons are complicated creatures, but three of their most important features are:

  • They are connected to each other.
  • They can send signals of varying intensity (including negative or inhibitory signals) to each other.
  • They can turn more or less “on” as a function of the incoming signals.

Neural networks jump off from there, replacing actual cells with simplified virtual counterparts. In a modern neural network, the “neurons” can belong to one of three layers: input, output, and hidden.

Input layers code for “features” of something. For example, if the neural network is designed to classify black and white images, each pixel in the image could be a feature associated with an input neuron. The brighter the pixel, the more “on” the input neuron. If the neural network recognizes speech, the intensity of sound waves at a certain frequency might be a feature. More intensity at that frequency might correspond to being “on” for one of the neurons. If the neural network is designed to play Go, the presence of a white stone at each point on the game board might be a feature. The neuron is “on” if a white stone is present at the input neuron’s associated point on the game board.

Output layers are how the neural network communicates with its users. If the network is an image classifier, there could be an output neuron for every class of photo. How “on” the neuron is could convey the confidence that the image belongs to that class. If the neural network is a speech recognizer, the output neurons could be text (words or letters). If the neural network is designed to play Go, the output neurons communicate the chosen move (there may be a neuron for every space, and the one the network would like to place a stone turns on).

Hidden layers lie between the input and output layers, and do the “thinking” of the network. In modern neural networks there can be many hidden layers of neurons. Together, they form a complex web of connections, each with a potentially positive or negative weight, and with each neuron having a potentially different threshold for activation. The operation of a neural network is about the propagation of signals forward through the network. The input neurons corresponding to the data’s features turn on and send signals to connected hidden neurons. Each of those neurons adds up the strength of the incoming signals (which can each be positive or negative) and, depending on how much the sum exceeds its threshold, turns more or less on. The hidden layer neurons then send signals to the neurons connected to them. This process propagates forward until some of the output neurons are activated, communicating the “thoughts” of the neural network.
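To make the forward pass concrete, here is a minimal sketch in plain Python of a toy network with two input neurons, two hidden neurons, and one output neuron. Every input value, weight, and threshold below is made up for illustration; a real network has thousands or millions of them, learned from data rather than written by hand.

```python
import math

def sigmoid(x):
    # Squash the summed signal into a value between 0 (fully "off") and 1 (fully "on")
    return 1.0 / (1.0 + math.exp(-x))

def neuron(incoming, weights, threshold):
    # Add up the weighted incoming signals (each weight can be positive or negative),
    # then turn more or less "on" depending on how far the sum exceeds the threshold
    total = sum(signal * weight for signal, weight in zip(incoming, weights))
    return sigmoid(total - threshold)

# Input layer: two feature values, e.g. the brightness of two pixels
inputs = [0.9, 0.2]

# Hidden layer: each neuron has its own (made-up) weights and threshold
hidden = [
    neuron(inputs, weights=[0.8, -0.5], threshold=0.1),
    neuron(inputs, weights=[-0.3, 0.7], threshold=0.2),
]

# Output layer: one neuron reading the hidden layer's activations
output = neuron(hidden, weights=[1.2, -0.6], threshold=0.3)

print("hidden activations:", [round(h, 3) for h in hidden])
print("output activation: ", round(output, 3))
```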

That’s it. Nothing magical is happening. The propagation of signals at each step happens according to relatively simple rules. The big picture is complicated, but only because there are so many of these small steps, and because they build on each other.

A useful thing about neural networks is that you don’t actually have to set the connections, signal strengths, and thresholds by hand. If you have enough data, there are algorithms that can do all of that automatically. Give the neural network an example set of features and see what its output neurons say. If they are wrong, propagate adjustments backward from the outputs to the signal strengths and thresholds, so that the network is more likely to give the correct answer the next time it sees similar features.
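Here is an equally stripped-down sketch of that adjustment process for a single neuron: after each example, nudge every weight (and the threshold) a little in the direction that reduces the error. Real networks propagate these adjustments backward through all of their hidden layers, which libraries like PyTorch and TensorFlow handle automatically; the training data, learning rate, and update rule below are purely illustrative.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Invented training data: feature pairs and the correct ("target") output for each
examples = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
            ([1.0, 0.0], 1.0), ([1.0, 1.0], 1.0)]

weights = [0.0, 0.0]
threshold = 0.0
learning_rate = 0.5

for _ in range(1000):
    for features, target in examples:
        # Forward pass: see what the neuron currently says about this example
        activation = sigmoid(sum(f * w for f, w in zip(features, weights)) - threshold)
        error = activation - target
        # Nudge the weights and threshold so this example comes out
        # a little more correct the next time the neuron sees it
        for i, f in enumerate(features):
            weights[i] -= learning_rate * error * f
        threshold += learning_rate * error

# After training, the neuron reproduces the answers it was shown
for features, target in examples:
    activation = sigmoid(sum(f * w for f, w in zip(features, weights)) - threshold)
    print(features, "->", round(activation, 2), "(target:", target, ")")
```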

Do our brains really work this way? I certainly don’t sense neurons signalling to each other and adding up thresholds. To the extent our brains work this way, the hidden layers are unconscious and we are only conscious of the output. We see a face, and we experience the thought “oh, that’s Grandma” leaping into our head unbidden. But underlying this recognition is (possibly) a biological neural network trained from birth to identify and categorize faces. The light coming into our eyes gets shunted off to various input neurons, which then propagate that information through hidden layers until the correct output neuron (or set of neurons) lights up. At that point, I have the conscious thought “Grandma!”

Neural Networks and Representations

At a more abstract level, neural networks represent local regularities in the data they are trained on with the structure of the network itself (including its signal strengths and each neuron’s threshold). This turns out to be a very flexible way of representing regularities. Indeed, neural networks can represent regularities that are difficult to concisely express in alternative schemes such as rules, probabilistic statements, and metaphors.

There are two sides to this coin. On the one hand, neural networks are capable of representing regularities that simply can’t be concisely represented any other way. On the other hand, the very nature of these regularities is such that they defy translation into alternative forms of representation. There is no metaphor, rule, or probability that “explains” the decision-making of the neural network (except a rule that tediously describes the network’s complex inner structure).

Take this excerpt from Part IV of “Machine Learning is Fun!”, which describes how to train a neural network to identify faces:

…the neural network learns to reliably generate 128 measurements for each [face]. Any ten different pictures of the same person should give roughly the same measurements.
…So what parts of the face are these 128 numbers measuring exactly? It turns out that we have no idea. It doesn’t really matter to us. All that we care about is that the network generates nearly the same numbers when looking at two different pictures of the same person.

The tradeoff is that neural networks allow us to learn regularities that can’t easily be represented in our favored modes, but at the cost of unintelligibility.
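To make the quoted example concrete, here is a hypothetical sketch of how measurements like these get used: two photos are treated as the same person when their measurement vectors land close together. The vectors, the distance function, and the cutoff below are all invented for illustration; the only thing taken from the article is the idea that “roughly the same measurements” means “same face,” even though nobody knows what any single measurement means.

```python
import math
import random

def distance(a, b):
    # Euclidean distance between two lists of measurements
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Stand-in embeddings: in reality these 128 numbers come out of the trained
# network, and nobody can say what any individual number is measuring
random.seed(0)
grandma_photo_1 = [random.gauss(0, 1) for _ in range(128)]
grandma_photo_2 = [x + random.gauss(0, 0.02) for x in grandma_photo_1]  # same face, different picture
stranger_photo = [random.gauss(0, 1) for _ in range(128)]

SAME_PERSON_CUTOFF = 0.6  # illustrative value, not taken from any real system

print(distance(grandma_photo_1, grandma_photo_2) < SAME_PERSON_CUTOFF)  # True: judged the same person
print(distance(grandma_photo_1, stranger_photo) < SAME_PERSON_CUTOFF)   # False: judged different people
```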

Neural networks are also capable of innovating, in the sense of exploiting these regularities to step into the unknown and make a choice. But since the regularities they exploit are foreign to us and our very way of thinking, the innovations they come up with may seem mysterious and almost magical.

This is demonstrated really well in the Netflix documentary AlphaGo.

AlphaGo and Unknown Regularities

AlphaGo’s 2016 match against Lee Sedol, as captured in the documentary AlphaGo, is a great example of how neural networks can exploit local regularities we don’t understand. AlphaGo is a program designed to play the board game “Go.” The game is relatively straightforward: two players take turns laying stones on a 19×19 grid. The goal is to enclose more territory than the opponent. However, because the board is so large, the set of possible games is enormous. There are 361 possible positions for the first stone, 360 possible positions for the second, 359 for the third, and so on, giving over 5.9 trillion possible sequences for the first five stones alone. This enormous space of possible games has long meant that brute-force calculation doesn’t work very well, even for computers. Go has been played for so long, and by so many people, though, that a large number of local regularities related to the game have been identified.
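The 5.9 trillion figure is just the number of ordered ways to place the first five stones, which is easy to verify:

```python
# Ordered ways to place the first five stones on a 19x19 board
points = 19 * 19  # 361 intersections
count = 1
for stone_number in range(5):
    count *= points - stone_number  # 361 * 360 * 359 * 358 * 357
print(f"{count:,}")  # 5,962,870,725,840 -- a bit under 6 trillion
```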

AlphaGo does not rely completely on neural networks, but they are a prominent component of its programming. In 2016 AlphaGo faced off against Lee Sedol, a legendary Go player considered to be the greatest of the last decade. Throughout the five game match, AlphaGo made surprising moves that baffled commentators, but later paid off. Move 37 in game 2 is a wonderful illustration:

Commentator #1: Oh, wow.
Commentator #2: Oh, it’s a totally unthinkable move.
Commentator #1: Yes.
Commentator #3: The value… that’s a very… that’s a very surprising move.
Commentator #4: (chuckling) I thought it was a mistake.
Fan: When I see this move, for me, it’s just a big shock. What? Normally, humans, we never play this one because it’s bad. It’s just bad. We don’t know why, it’s bad.

Fan (a European Go champion) makes my point very well. Humans have played Go for a long time, and they have internalized certain regularities (indeed, in this case, the regularity that this is just a bad move is known without anyone understanding why!). AlphaGo is playing a move based on regularities unappreciated by human players. In fact, one of its creators later peers inside AlphaGo’s program and learns that AlphaGo put the probability of a human playing this move at 1 in 10,000.

As the game unfolds, the brilliance of the move becomes clear. Lee Sedol later discusses this move:

Lee Sedol: I thought AlphaGo was based on probability calculation and it was merely a machine. But when I saw this move, I changed my mind. Surely, AlphaGo is creative. This move was really creative and beautiful… This move made me think about Go in a new light. What does creativity mean in Go? It was a really meaningful move.

AlphaGo goes on to win the game, with move 37 eventually seen as a turning point.

I don’t know the nature of the regularity that AlphaGo exploited. It might have been the kind of thing easily explained, but something human players had simply missed for millennia. That strikes me as unlikely. It might have been something easy to explain (if AlphaGo had been trained to do so), but only exploitable if you have the capacities of AlphaGo (e.g., the ability to track dozens of positions in parallel). I prefer to believe it was something new: a regularity impossible to express in our favored forms.

Uncanny Genius

While neural networks are inspired by the brain, it’s not clear to what extent our brains actually work this way. I don’t have the expertise to weigh in on that debate. But, to conclude, let’s assume the picture painted above is broadly applicable to the human brain. Doing so provides a plausible explanation for the mysterious judgments of geniuses, who simply intuit an answer with uncanny precision, unable to provide an explanation for their insight.

Earlier, I gave the example of how brain structures, organized like neural networks, could identify your grandma from a sea of faces. The interesting thing here is that we experience this as automatic. We just “know” that’s grandma, without access to the underlying categorization process in our own heads.

Our ability to just “know” Grandma’s face doesn’t strike us as particularly mysterious. But when well-trained neural networks in our brains do less common things, it can seem mysterious and magical. An expert in a particular domain – mathematics, engineering, science – sees countless examples in that domain over a career. Each example trains their internal representation of the domain. Then, one day, facing a novel situation, they just know what to do. And their inability to explain themselves leaves observers dumbstruck and in awe.

I’ll close with an example of this from Gary Klein’s study of firefighters (recounted in Superforecasting by Philip Tetlock and Dan Gardner). An experienced fire commander is combating a routine kitchen fire that is behaving a bit strangely. The commander is seized by an uneasy feeling. He orders everyone out of the house. Moments later, the floor collapses. It turns out the true source of the fire had been in the basement.

How did he know trouble was afoot? We can imagine that the neurons and connections in the firefighter’s head were tuned by countless experiences with fire, until their structure encoded regularities in fires that are impossible to clearly express in rules, metaphors, or probabilities. Just like AlphaGo, the commander was unable to explain how he knew to get out. He described it as ESP.

 
