
Why is knowledge transfer hard in neural nets but easy with metaphor?

Neural networks (NNs) and metaphors are both ways of representing regularities in nature. NNs pass signals about data features through a complex network and spit out a decision. Metaphors take as a given that we know something, and then assert something else is “like” that. In this post, I am thinking of NNs as a form of representation belonging to computers (even if they were initially inspired by the human brain), and metaphors as belonging to human brains.

These forms of representation have very different strengths and weaknesses.

Within some narrow domains, NNs reign supreme. They have spooky-good representations of regularities in these domains, best demonstrated by superhuman abilities to play Go and classify images. On the other hand, step outside the narrow domain and they completely fall apart. To master other games, the learning algorithms AlphaGo used to master Go would essentially have to start from scratch. It can’t condense the lessons of Go down to abstract principles that apply to chess. And its algorithms might be useless for a non-game problem such as image classification.

In contrast, a typical metaphor has opposite implications: great at transferring knowledge to new domains, but of more limited value within any one domain.  Anytime someone tells a parable, they are linking two very different sets of events in a way I doubt any NN could do. But metaphors are often too fuzzy and imprecise to be much help for a specific domain. For instance, Einstein’s use of metaphor in developing general relativity (see Hofstadter and Sander, chapter 8) pointed him in the right direction, but he still needed years of work to deliver the final theory.

This is surprising, because at some level, both techniques operate on the same principles.

Feature Matching

Metaphor asserts two or more different things share important commonalities. As argued by Hofstadter and Sander, one of the most important forms of metaphorical thinking is the formation of categories. Categories assert that certain sets of features “go together.” For example, “barking,” “hairy,” and “four legs” are features that tend to go together. We call this correlated set of features the category “dog.” Categories are useful because they let us fill in gaps when something has some features, but we can’t observe them all.

This kind of categorization via feature tabulation was actually one of the first applications of NNs. As described by Steven Pinker in How the Mind Works, a simple auto-associator model is a NN where every node is connected to every other node. These kinds of NNs easily “fill in the gaps” when given access to some but not all of the features in a category. For example, if barking, hairy, and four legs are three connected nodes, then an auto-associator is likely to activate the nodes for “hairy” and “four legs” when it observes “barking.” Even better, these simple NNs are easy to train. And if such simple NNs can approximate categorization, then we would expect modern NNs with hidden layers to do that much better.
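To make the idea concrete, here is a minimal sketch of a Hopfield-style auto-associator in Python (my own toy construction, not Pinker’s exact model). Given only “barking,” it fills in “hairy” and “four legs”:

```python
import numpy as np

# Features: [barks, hairy, four legs, feathers, wings]; +1 = present, -1 = absent.
patterns = np.array([
    [ 1,  1,  1, -1, -1],   # "dog"
    [-1, -1, -1,  1,  1],   # "bird"
])

# Hebbian learning: strengthen the connection between features that go together.
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

# Observe only "barks"; every other feature starts out unknown (0).
state = np.array([1.0, 0, 0, 0, 0])
for _ in range(5):               # let activation spread until the network settles
    state = np.sign(W @ state)

print(state)   # -> [ 1.  1.  1. -1. -1.]: "hairy" and "four legs" get filled in
```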

Now, as I’ve argued elsewhere, proper use of metaphor isn’t as simple as matching features. The “deep features” of a metaphor are the ones that really matter. Typically there will be only a small number of these, but if you get them right, the metaphor is useful. Get it wrong, and the metaphor leads you astray.

But this isn’t so different from NNs either. NNs implement a variety of methods to prune and condense the set of features, almost as if they too are trying to zero in on a smaller set of “deep features.”

  • Stochastic gradient descent (a major tool in the training of NNs) involves optimizing on a random subset of your data at each step, rather than on all the data. In essence, we throw some information away each iteration (although we throw away different information each time). Now, this is partially done to speed up training, but it also seems to improve the robustness of the NN (i.e., it is less sensitive to small changes in the data set).
  • Dropout involves randomly setting some nodes’ activations to zero during training. If a node really is important, the optimization will keep rediscovering that fact, but it turns out you get better results if you frequently ask your NN to randomly ignore some features of its data.
  • Information bottlenecks are NN layers with fewer nodes than the incoming layer. They force the NN to find a more compact way to represent its data, again forcing it to zero in on the most important features. (A toy sketch of dropout and a bottleneck follows below.)
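Here is a toy sketch of these tricks together (my own example, assuming PyTorch): a tiny network trained with mini-batch SGD that includes a dropout layer and an eight-node bottleneck. None of the names or sizes come from a real system; they are just illustrative.

```python
import torch
import torch.nn as nn

# A toy classifier: 64 input features, dropout, and an 8-node bottleneck layer.
model = nn.Sequential(
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zero half the activations on each training pass
    nn.Linear(32, 8),    # bottleneck: far fewer nodes than the incoming layer
    nn.ReLU(),
    nn.Linear(8, 2),     # two output classes
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 64)             # random stand-in for real training data
y = torch.randint(0, 2, (256,))

model.train()
for step in range(100):
    # Stochastic gradient descent: each step sees only a random mini-batch,
    # so most of the data is ignored on any one iteration.
    idx = torch.randint(0, 256, (32,))
    optimizer.zero_grad()
    loss = loss_fn(model(X[idx]), y[idx])
    loss.backward()
    optimizer.step()
```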

So, to summarize: using metaphor involves matching the deep features between two different situations. NNs are also trained to seek out the “deep features” of training data, the ones that are most robustly correlated with various outcomes. So why don’t NNs transfer knowledge to new domains as well as metaphors?

What are the Features?

It may come down to the kinds of features each picks out. As discussed in another post, the representations of NNs are difficult (impossible?) to concisely translate into forms of representation humans prefer. It’s hard to describe what they’re doing. So we can’t directly compare the deep features a NN picks out with the deep features we humans would select.

However, image classification NNs give us strong clues that NNs are picking up on things very different from what we would select. There is an interesting literature on finding images that are incorrectly classified by NNs. In this literature, you start with some image and change as few pixels as possible, and as little as possible, to fool the NN into an incorrect classification. For example, this image from the above link is incorrectly classified as a toaster:

Figure 1. Fooling image classification neural networks (source)

How can this be? Whatever the NN thinks a toaster looks like, it’s obviously different from what you or I would think. The huge gap between the deep features we identify and those identified by a NN is best illustrated by the following images from the blog of Filip Piękniewski.

Figure 2. Filip Piękniewski trained a NN to tweak gray images until they were classified with high confidence (source)

Filip starts with gray images and trains a NN to modify pixels until a second NN gives a confident classification. The top left image is classified as a goldfish with 96% probability. The bottom right is classified as a horned viper with 98% probability. The results are kind of creepy, as they highlight the huge gulf between how “we” and NNs “see.” Even though metaphor and NN both involve zeroing in on the deep features of a problem, the features selected are really different.
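For intuition, here is a rough sketch of that procedure (my reconstruction, not Piękniewski’s actual code; it assumes PyTorch and torchvision’s pretrained ResNet-18 as the classifier being fooled):

```python
import torch
import torchvision.models as models

classifier = models.resnet18(weights="DEFAULT").eval()   # stands in for the "second NN"
target_class = 1                 # index 1 is "goldfish" in the ImageNet label ordering

image = torch.full((1, 3, 224, 224), 0.5, requires_grad=True)   # start from flat gray
optimizer = torch.optim.Adam([image], lr=0.01)

for step in range(200):
    optimizer.zero_grad()
    probs = torch.softmax(classifier(image), dim=1)
    loss = -probs[0, target_class]      # gradient ascent on the target-class probability
    loss.backward()
    optimizer.step()
    image.data.clamp_(0.0, 1.0)         # keep pixel values in a valid range
    # (Proper ImageNet normalization is skipped here for brevity.)

print(f"goldfish confidence: {probs[0, target_class].item():.1%}")
```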

Different Data, Different Features

[Warning: this isn’t my area but it is my blog so I’m going there anyway]

One reason figure 2 is so alien to us is that it comes from a very alien place. Compared to a human being, a NN’s training data is extremely constrained. Yes, they see millions of images, and that seems like a lot. But if we see a qualitatively different image every three seconds, and we’re awake 16 hours a day, then we see a million distinct images every 52 days. And unlike most image classification NNs, we see those images in sequence, which is additional information. Add to that inputs from the rest of our senses, plus intuitions we get from being embodied in the world, plus feedback we get from social learning, plus the ability to try and physically change the world, and it starts to become obvious why we zero in on different things than NNs do.

In particular, NNs are (today) trained to perform very well on narrow tasks. Human beings navigate far more diverse problems, many of which are one-of-a-kind. That kind of diverse experience gives us a better framework for understanding “how the world works” on the whole, but less expertise with any one problem. When faced with a novel problem, we can use our blueprint for “how the world works” to find applicable knowledge from other domains (figure 3). And this skill of transferring knowledge across domains is one that we get better at with practice, but which requires knowledge of many domains before you can even begin to practice.

Figure 3. “I gave it a cold.”

My earlier post on the use of metaphor in alchemy and chemistry illustrates how a better blueprint for “how the world works” can dramatically improve feature selection. Prior to 1550, alchemists used metaphor extensively to guide their efforts, but it mostly led them astray. They chose metaphors on the basis of theological and symbolic similarities, rather than underlying interactions and processes. This isn’t a bad idea, if you think the world is run by supernatural entities with a penchant for communicating revelations and other hidden knowledge to mankind. But a better understanding of “how the world works” (i.e., according to impersonal laws) allowed later chemists to choose more fruitful metaphors than the alchemists.

When I see something like Figure 2, I see an intelligence that hasn’t learned how the world “really is.” Animals and physical objects are clumps of matter, not diffuse color patterns, no matter how much those color patterns align with previously seen pixel combinations. But I can see how it would be harder to know that if you hadn’t handled animals, seen them from different angles, and been embodied in physical space.

So I think one reason human metaphor transfers knowledge so well is that it has so much more diverse training data to draw on. We pick deep features with an eye on “how the world works.” So why don’t AI companies just give their own NNs more diverse training data? One reason is that important parts of the structure of NNs still have to be hand-tuned to the kind of training data they receive. You can’t just turn an image classification architecture loose on the game of Go and expect to get comparable results. There seems to be a big role for the architecture of NNs.

Whatever the “right” architecture is for the diverse training data humans encounter, evolution seems to have found it. But it took a long time. Evolution worked on the problem for hundreds of millions of years in parallel over billions of life forms. By contrast, AlphaGo Zero played 21 million games of Go to train itself. At one hour per game, that works out to a bit under 2,400 years, if the games were played at human speed one at a time.

In a sense, I think this makes NNs more impressive – look how much they’ve done with the equivalent of a paltry 5,000 years of evolution! But I also think it provides a warning that matching broadly human performance might be a lot harder than recent advances have suggested.

 

Faking Genius

Geniuses are rare in life, but common in fiction. No offense to our writing class, but I suspect a lot of these fictional geniuses are written by smart-but-not-genius writers. But how can this be? How does a non-genius author write a genius character? If the character is smarter than the author, then their thoughts and decisions are, by definition, the kind of things the author wouldn’t think of when in that situation!

How do you fake genius? I’ve noticed three strategies authors use.

House, M.D.

The Genius Who Knows Lots of Stuff

This is the most common and, to me, most annoying strategy. It treats geniuses as little more than people who know lots of facts. I haven’t watched that much House M.D., but from what I’ve seen he’s an archetype of this format. Someone comes in with weird symptoms and House is the only one who knows about the rare disease that matches the symptoms. He is a walking storehouse of weird disease trivia (I know, I know, there’s more to him than that, it’s an illustration not a criticism of the show).

This is a pretty easy strategy for a writer to implement. The writer just uses Google and a bookshelf to give the genius a torrent of factoids to say. But it’s also the strategy that leaves me cold, precisely because it’s easy to implement. It’s no more illuminating than flipping through an old set of Trivial Pursuit cards.

A twist on this type is the genius who knows which facts are the right ones. In this case, the author lays down the “real” clues, but then buries them under a pile of extraneous detail and red herrings. The author then makes the genius (usually a detective here) able to sniff out the real clues from the red herrings. The veracity of the “real” clues is proved when they solve the problem. Maybe they point to a villain who confesses or tries to kill the protagonist when outed. Or maybe they point to a treatment that cures the patient. Afterwards, the audience is satisfied that the real clues were there, ready to be seen, but we stand in awe of the genius’ “ability” to see what we had missed.

Geordi La Forge

The Genius Only Intelligible via Metaphor

The next type of genius is so much smarter than us that his speech is incomprehensible. We, the audience, are like dogs trying to understand humans. We recognize some of the words (frequently the word is quantum), but their connections are baffling. Frustrated, the genius then explains the gist of his idea with a simple metaphor that we can understand. Often the genius has to be prompted by someone saying “in English please!”

Star Trek’s tendency to do this was lampooned on Futurama (episode 412, “Where No Fan Has Gone Before”):

Leela: I didn’t wanna leave them either, Fry, but what are we supposed to do?
Fry: Well, usually on the show someone would come up with a complicated plan then explain it with a simple analogy.
Leela: Hmm. If we can re-route engine power through the primary weapons and reconfigure them to Melllvar’s frequency, that should overload his electro-quantum structure.
Bender: Like putting too much air in a balloon!
Fry: Of course! It’s so simple!

Star Trek is hardly the only party guilty of this trick. The Marvel movies do this when Bruce Banner and Tony Stark talk, for example. It’s not absent from more high-brow stuff either (e.g., Dr. Shevek’s explanations of his new physics in Ursula K. Le Guin’s The Dispossessed). This strategy seems to be used a lot in science fiction, precisely because in that genre we are dealing with technologies that haven’t been invented. If the author could explain exactly how they worked, then it wouldn’t be science fiction!

I think this method can actually be very effective. I am fond of this example, from the independent movie Travelling Salesman. In the movie, one of the protagonists has cracked the P versus NP problem. In brief, a proof related to the problem would allow us to solve difficult problems (like finding the factors of large numbers) at super speed. This is an open problem, so of course the writers can’t describe how it would really be solved. Instead, they use the following metaphor:

Tim: What if I took something like a quid coin, ok, and I buried it in the [desert]? It’s buried, you have no idea where it is, and I ask you to find it. How long would that take you?
Hugh: (scoffs) well-
Tim: Years, right, I mean millions of years if the desert were big enough.
Hugh: Sure
Tim: What if I melted the sand? Took all the sand in the desert and melted it. Glass. The whole desert becomes one big sheet of glass. So now finding the coin is easy, right? You just– you see it floating there. Change the sand to glass and finding the coin is trivial.

The metaphor conveys the idea that the genius has found a way to peer through all the complexity of a problem and see straight to the answer. But the writer doesn’t actually have to explain how it’s done.

This strategy is easy enough to write. You need a lot of complicated technical-scientific-literary buzzwords. You need a metaphor for the genius’ idea, but you don’t have to have the details worked out. Then you just alternate between the two modes of expression as needed. It’s kind of the empty calories of insight, because it gives the feeling of understanding without the reality, but I still prefer it to “the genius who knows lots of stuff.”

Ender Wiggin

The Handicapped Genius

A final type of genius is more satisfying, at least to me. In this case, the genius operates under a handicap, so that exhibiting high (but not genius) intelligence by modern standards is itself proof of genius. A great example is Ender Wiggin, from Ender’s Game. In the book, Ender has a number of cool insights about warfare in three dimensions, and in general exhibits adult-level intelligence. But he’s only six years old! A six year old exhibiting adult-level reasoning is believable as a genius.

Another common twist is to put your genius in the past and have them be ahead of their time. The character of Thomasina in Arcadia is an example. In the 1800s and without the aid of computers, Thomasina discovers fractals (actually discovered by Benoit Mandelbrot in the 1970s). When the character is ahead of their time, the author has an excuse to illustrate the derivation of actually brilliant things (like fractals). But because these things have been discovered, with a bit of research, the author can learn how they were initially derived and then just copy that for their genius.

This strategy is more satisfying to me than the others because it usually exhibits reasoning from A to B, illustrates the connections between ideas, and so on. The facts aren’t just a torrent, but form a web of relationships. And for characters who are ahead of their time, you might get an idea of what it’s like to be inside the mind of genius. The catch is, you are actually reading a sort of disguised biography of whatever genius discovered the thing (like fractals) we are pretending was discovered by the fictional genius.

How to Fake Genius

So that’s how I’ve seen it done. Despite the tone of the above, I actually don’t think these are bad places to start.  These strategies do get at some truths about geniuses: they do know a lot of stuff and they frequently are unintelligible unless they talk down to us using simple metaphors. Read a pop-physics book for copious examples.

But I would love it if we could go further. If I could feel what it’s like to really be in the head of a first class mind, that would be great. Can we do better? I’m not a writer of fiction, a genius, or even a psychologist, but I have done some research on “innovation,” so I’m not 100% unqualified to make some suggestions. Specifically, I think a good genius character should have the following characteristics:

  1. Geniuses know many things.
  2. Geniuses think with both speed and endurance.
  3. Geniuses think clearly.
  4. Geniuses have a lot of working memory.
  5. Geniuses make unusual connections between disparate concepts.

Most of these traits are not that hard to fake with time and tools. Let’s take them in turn.

1. Geniuses know many things

This is the one nearly everyone gets right, so I’ll be brief. Google and libraries are your friends. A team of writers can pretend all their accumulated knowledge fits in one genius’ head.

2. Geniuses think with both speed and endurance

Another easy one to fake. The writer can ponder the perfect witty retort for their genius for an hour, a week, or a year. But when they put pen to paper, it will seem as if it was instantly on the genius’ lips.

Geniuses are also frequently capable of intense focus for long periods of time. The author can afford to be scatter-brained, as long as they have more time than the genius to ponder. The audience need not know that one day of focused attention by the fictional genius took the writer a few months of scattered attention.

3. Geniuses think clearly

By this, I mean that geniuses don’t make weak arguments and logical errors. Unfortunately, as laid out at length in Mercier and Sperber’s The Enigma of Reason, individuals have a hard time objectively evaluating the strength of their own arguments. This is bad news, because Mercier and Sperber also present evidence that humans are quite good at objectively evaluating the arguments of others. If the author gives their genius a bad argument, the audience is more apt to spot it than the author.

Fortunately, we can use the same trick to our advantage. A good way to ensure your genius’ arguments are strong is to find a partner (or multiple partners) who you can talk them over with. A group of debaters, each of whom is individually biased towards their own argument, can nonetheless form a very clever collective intelligence because they can objectively evaluate each other. An author can, however, bring these disparate voices into the head of a lone genius to make the collective mind a singular one.

4. Geniuses have a lot of working memory

On average people can hold about 7 pieces of information in their head at the same time, some more and some less. Geniuses, I presume, can hold more. This is important because it’s much easier to see connections between ideas that are held in working memory. Thus, geniuses can perhaps see how larger sets of facts are connected to each other.

Now we are getting into terrain a bit harder to fake. Pen and paper are useful tools for keeping facts close to hand if not in your brain. You can work out the genius’ idea with lots of time and paper (including a lot of paper that is discarded) and then pretend it all happened in their head. Another possible technique is to “chunk” several pieces of information into a broader concept, so it can be worked with. This takes longer than it would for a genius (you have to spend time understanding the chunked concept), but it’s a price of faking genius.

5. Geniuses Make Unusual Connections Between Disparate Concepts

This is hardest to fake. One possibility is to mine your own life for the top 2 or 3 epiphanies and then to reverse engineer a scenario for them to emerge. It helps if you keep a record of your thoughts. Alternatively, you might pick a few disparate subject areas, read deeply in them, and attempt to harvest a surprising connection or two. Again, reverse engineer a setting for those connections to emerge. In either case, the goal would be to make it seem as if these kinds of realizations are ordinary events for the fictional genius.

Fake Geniuses Among Us

It’s irresistible to wonder if we can’t use similar tricks to fake genius in our real lives. I think it’s not only possible but common. Indeed, this is the kind of thing academics and scientists do all the time. We cite things we haven’t read carefully. At seminars, only one person presents, even if the work came from many. Our papers omit the missteps, dead-ends, and other frustrations of research. There’s no place in a paper’s methods section to write “then I thought about the problem off-and-on for two years.” We talk our ideas through at length with our colleagues. We use computers and paper to augment our paltry memory. And we pick and choose research questions that are well suited to the weird ideas we want to explore.

If there’s a larger point, it’s this: I am suspicious of the notion that the difference between us and geniuses is one of kind and not merely of degree. I am suspicious that they can ever be incomprehensible, so long as we give ourselves sufficient time and tools to work out their thoughts. Brainpower, time, and thinking tools are all inputs into great ideas, but to a large degree I think we can substitute the latter two for the first.

 

Finding the Right Metaphor

Last post introduced the hypothesis that having more metaphors equips people to innovate, because it expands their set of tools for thinking about novel settings. It presented some reasons to think this is true. This post pushes back on that simplistic hypothesis. Having the right metaphor is only half the battle; you also have to find it.

Could it be that having more metaphors is sometimes a handicap? First off, having more metaphors to search through could make it harder to find the right one. Second, having more metaphors might cause a problem analogous to over-fitting. You might have a metaphor that fits many of the surface details of the problem at hand, but not the smaller number of crucial “deep features.”

Metaphors in Alchemy and Chemistry

This is an interesting idea tossed out by David Wootton in The Invention of Science.

For classical and Renaissance authors every well-known animal or plant came with a complex chain of associations and meanings. Lions were regal and courageous; peacocks were proud; ants were industrious; foxes were cunning. Descriptions moved easily from the physical to the symbolic and were incomplete without a range of references to poets and philosophers. [p. 81]

Wootton suggests this formed a sort of reasoning trap. Every potential metaphor had so much baggage in the form of irrelevant features that it became hard to use them well.

I think the use of metaphors in alchemy provides a good illustration of this phenomenon. Gentner and Jeziorski (1990) compare the use of metaphor/analogy in alchemy (prior to 1550) and chemistry (post 1600s). Compared to chemistry, alchemy seems to have been endlessly led astray by metaphors chosen on the basis of red-herring associations.

As explained by Gentner and Jeziorski, a goal of alchemists was to transmute base metals into more valuable ones via the “philosopher’s stone.” This stone was often called an egg, since eggs symbolized the “limitless generativity of the universe.” From Gentner and Jeziorski, a sampling of alchemical thought:

1. It has been said that the egg is composed of the four elements because it is the image of the world and contains in itself the four elements…
2. The shell of the egg is an element like earth, cold and dry; it has been called copper, iron, tin, lead. The white of the egg is the water divine, the yellow of the egg is couperose [sulfate], the oily portion is fire.
3. The egg has been called the seed and its shell the skin; its white and its yellow the flesh, its oily part, the soul, its aqueous, the breath or the air.

As an egg is composed of three things, the shell, the white, and the yolk, so is our Philosophical Egg composed of a body, soul, and spirit. Yet in truth it is but one thing [one mercurial genus], a trinity in unity and unity in trinity – Sulphur, Mercury, and Arsenic. [p. 12]

Even as brilliant a mind as Isaac Newton got lost in the alchemical thickets of meaning. Mercier and Sperber give us this sampling of his alchemical musings:

Neptune, with his trident leads philosophers into the academic garden. Therefore Neptune is a mineral, watery solvent and the trident is a water ferment like the Caduceus of Mercury, with which Mercury is fermented, namely, two dry Doves with dry ferrous copper. [p. 326]

Gentner and Jeziorski contrast this style of thinking with the use of metaphor/analogy by later chemists. For example, Robert Boyle, trying to illustrate how individually minute effects can have large-scale effects in aggregate, uses metaphors of ants moving a heap of eggs and wind tugging on the leaves and twigs of a branch until it snaps off (Gentner and Jeziorski p. 8). Sadi Carnot uses the metaphor of water falling through a waterfall to understand the flow of heat through an engine.

What’s interesting is that neither Boyle nor Carnot used a novel metaphor that had been unavailable to the alchemists (although the math associated with Carnot’s model was new). It wasn’t the availability of new metaphors that differentiated chemists from alchemists. It was the selection and use of existing ones.

It’s About Selection

Metaphors are an effective form of amateur modeling. When facing a novel situation, we take a leap of faith that a similar and known situation will serve as a useful map of the unexplored terrain. Having more metaphors at your fingertips increases the chances one of them will be a good map for the situation. But the ability to find this metaphor matters nearly as much as having it. We need a library that is large but also organized.

Hofstadter and Sander argue metaphor selection is the real test of domain expertise. There are a lot of dimensions along which a metaphor can match the thing to be explained. I’ve written before that it’s the deep features that need to be matched, but there aren’t universal guidelines for differentiating a deep feature from a surface feature. Alchemists thought surface similarity to an egg was the important feature for proto-chemical work. That seems silly to us now, but I suspect that’s because we assume the world operates according to impersonal laws. If you believe the world is instead run by supernatural entities with a penchant for communicating revelations and other hidden knowledge to mankind, then any features with theological symbolism probably seem like the deepest ones of a problem.

I can’t resist one more example. Crosby, in his excellent book on quantification in Europe over 1250-1600, makes a similar point about something as seemingly association-free as numbers:

Western mathematics seethed with messages… in the Middle Ages and Renaissance. Even in the hands of an expert – or, especially, in the hands of an expert – it was a source of extraquantitative news. Roger Bacon, for instance, tried very hard to predict the downfall of Islam numerically. He searched through the writings of Ma’shar, the greatest of the astrologers who wrote in Arabic, and found that Abu Ma’shar had discovered a cycle in history of 693 years. That cycle had raised up Islam and would carry it down 693 years later, which should be in the near future, Bacon thought. The cycle was validated in the Bible in the Revelation of St. John the Divine 13:18, which Bacon thought disclosed that “the number” of the Beast or Antichrist was 663, a number certain to be linked to other radical changes. [p. 121]

Never mind that the number of the beast is 666 – Bacon’s Bible apparently had a typo – and that neither 666 nor 663 is equal to 693! In this era, those were not “deep features” of metaphorical similarity.

Hofstadter and Sander essentially say practice and subtle skills determine how good an expert is at selecting the right metaphor. For many domains, that is surely correct. But I see two other tricks for organizing our library of metaphors.

Unification

The first trick is unification. If we can subsume individual cases under more and more universal ones, then we reduce the number of metaphors we have to search through. We also reduce the risk that we overfit to a metaphor with many surface similarities. To stick with our metaphor of a metaphor library, this is like keeping one book from a shelf.

For Hofstadter and Sander, this is essentially what categorization is. They give the example of the category mother emerging from a child’s encounter with more and more people who are somebody’s mother. Whereas the child may initially have to remember a large set of disconnected facts  – Rachel is a mother, Sarah is a mother, Thomas is not a mother, Rebecca is not a mother, etc. – over time the category of mother emerges. When a new person is encountered, the child no longer has to mentally compare them one by one with Rachel, Sarah, Thomas, Rebecca and others. Instead, it can quickly fill in a lot of gaps in its knowledge about the person if it finds out she fits in the mother category.

This is common in science, where models belong to the same family as metaphors (link). For example, many results in economics might initially be derived using a specific functional form. We might assume demand is given by the equation q = A – Bp, where p is a price and A and B are positive numbers. Or we might assume it’s given by q = A – B·log(p) or q = A·p^(–B). This requires us to carry around all these different equations in our minds. Life is much simpler when we are able to generalize our result to any continuous function where demand is falling in price.
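As a tiny illustration of the payoff (my own sketch, using sympy), we can check the one property the general result actually needs – demand falling in price – for all three functional forms at once:

```python
import sympy as sp

p, A, B = sp.symbols("p A B", positive=True)

demand_forms = {
    "linear":              A - B * p,
    "semi-log":            A - B * sp.log(p),
    "constant elasticity": A * p ** (-B),
}

for name, q in demand_forms.items():
    slope = sp.simplify(sp.diff(q, p))
    print(f"{name:>20}: dq/dp = {slope}")

# Every slope is negative for positive A, B, and p: the one "deep feature"
# (downward-sloping demand) the generalized result relies on.
```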

You can see the same push for unification across most domains. It’s probably taken to its extreme in physics, where the quest for a single unified theory for the universe is taken to be a holy grail. From my outsider’s perspective (correct me if I’m wrong), historians seem to lie on the opposite end of the spectrum. In that field, detail matters.

Source: XKCD

With unification, we sacrifice details but we hopefully get the big picture right. It’s usually worse to get the big picture wrong than to get the details wrong, and since unification helps us zero in on the right “big picture” metaphor, it’s valuable. But unification becomes a problem when the knowledge domain resists it. This can happen when the details matter.

Systemization and Dani Rodrik’s Growth Diagnostics

When you can’t subsume everything under one case, you have to organize the cases. It’s fairly common to organize metaphors into big categories and then leave it at that. I’ve done that myself, grouping the representations used by innovators into five categories: rules, probabilistic statements, metaphors, neural networks, and instantiations. But you can also create rigorous processes for sorting through these categories and pinpointing the right metaphor for a given situation. The best example of this that I know of is Dani Rodrik’s growth diagnostics.

A bit of context is necessary to explain this. Rodrik is (among other things) a development economist. His growth diagnostics is a process for finding the correct economic model in that context. The problem development economists try to solve is why some national economies fail to grow at desired rates. Economic growth is a very complex and poorly understood phenomenon, and there are many competing models of the process. Each of these models is a simplification, and each provides different kinds of policy advice. One might emphasize the rule of law and secure property rights, another investment in education, and a third subsidies for favored industries. Rodrik’s growth diagnostics helps you pinpoint the model most applicable to the setting. Access to a good model then allows you to think through what the impacts of various policies might be.

Figure 2. Growth Diagnostics (from One Economics, Many Recipes by Dani Rodrik)

Growth diagnostics is basically a decision tree (figure 2). It starts at the top with the problem: insufficient private investment and entrepreneurship. Rodrik divides the possible causes of this into two categories: a low return to economic activity, or a high cost of finance. He then provides some suggestions for what kinds of evidence to look for to determine which is the case (e.g., “are interest rates low?”). Suppose we have decided there is a low return to economic activity. Moving down the tree, is the problem that there aren’t socially beneficial things to do (low social return), or merely that the private sector cannot find a way to make useful things profitable (low appropriability)? Again, Rodrik suggests specific things to look for to help you determine which branch of the tree you should descend to.
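To give a flavor of the procedure, here is a heavily condensed toy rendering of the top of the tree as code (my own sketch, not Rodrik’s; the evidence checks are illustrative stand-ins rather than his actual diagnostic questions):

```python
def growth_diagnosis(evidence):
    """Toy version of the top branches of a growth-diagnostics-style decision tree."""
    # Problem: low private investment and entrepreneurship.
    # First branch: high cost of finance, or low return to economic activity?
    if evidence["interest_rates_high"]:
        return "high cost of finance: examine savings and financial intermediation"
    # Low-return branch: low social returns, or low appropriability?
    if evidence["poor_infrastructure"] or evidence["low_human_capital"]:
        return "low social returns: infrastructure / human capital"
    if evidence["high_taxes_or_expropriation_risk"]:
        return "low appropriability (government failure): property rights, taxation"
    return "low appropriability (market failure): information / coordination externalities"

example = {
    "interest_rates_high": False,
    "poor_infrastructure": True,
    "low_human_capital": False,
    "high_taxes_or_expropriation_risk": False,
}
print(growth_diagnosis(example))   # -> "low social returns: ..."
```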

Following the tree gets you down to a simple economic model of what is constraining growth. Economics has failed to discover a single model to explain everything, but with a procedure for finding the right model, it remains useful. It illustrates how a well-stocked library of metaphors can be made maximally useful when combined with a framework for zeroing in on the right one.

Summary

Innovation, by definition, requires stepping out of the familiar and into the unknown. These sojourns go better for us when we have a map of the territory. Metaphors can serve this function; they assert the unknown is “like” the known. Having more of these maps is useful, because it’s more likely one of the maps is a good fit (link). But this is only true if we can find that map. Maps with too many details can lead us astray, because the details might fit really well but the big picture is off. If having more maps is going to be useful, we need to organize them. One way is to prune our collection to a small number of maps with only the most important details. The other is to create a meta-map over our maps: a process for determining the deep features of the situation and matching them to the corresponding metaphor.

 

Do Metaphors Make Us Smarter?

One of the ways we navigate a world full of novel situations and problems is by using metaphors. When facing a new situation or problem, we take a leap of faith that things we don’t know are like things we do know. We search through our memory and find a situation that is “similar” to the one at hand. We use that previous experience as a metaphor for the current one. If it’s a good metaphor, it’s a roadmap through the unknown.

An implication of this is that having more metaphors and more diverse metaphors is a powerful asset for thinking. All else equal, having more metaphors increases the chances one of them will be a good fit for your current problem. In this post, I’ll present some arguments that suggest this is the case. The next post adds some nuance: our ability to navigate our personal library of metaphors matters as much as (or more than) its size.

Three Cheers for Metaphors

I’m unaware of any study that directly compares innovation and the number of metaphors (drop me a line if you know of one). But there are a few lines of evidence that strike me as at least consistent with the theory.

1. Metaphors to Solve Problems

The closest thing we have to a direct test of the theory is psychology experiments of the following type:

  1. Give some subjects a metaphor well suited to solving a problem.
  2. Don’t give it to a control.
  3. See if the metaphor-equipped group is better at solving the problem.

Gick and Holyoak (1980) is an early example of the type. The authors asked study participants to solve the following problem. A patient has a stomach tumor that must be destroyed. No surgery is permitted, but a beam of radiation can destroy the tumor without operating. The problem is that any beam strong enough to kill the tumor is also strong enough to kill the tissue it must penetrate to reach the tumor. Any beam weak enough to leave the healthy tissue unharmed is also too weak to destroy the tumor. What should the doctor do?

While you ponder that, let me tell you another story. Totally unrelated, I swear. Once upon a time there was a general who wanted to capture a city. His army was large enough for the task, and there were many roads to the city. Alas, each road was mined. Any force large enough to take the city would detonate the mines as it moved down the road. A smaller force could move down the road without detonating the mines, but would be too small to take the city. What to do?

Fortunately, the general came up with a solution. He divided his army into many smaller divisions, and sent each down a separate road. Each division was too small to detonate the mines, and so they all converged on the city at the same time, and captured it.

Wow, what a neat story.

Now, have you figured out the tumor problem?

The trick is to use many weak beams of radiation, each pointing at the tumor from a different direction. They should all intersect at the tumor’s location but nowhere else. Their combined energy will destroy only the tumor, and not the healthy tissue that must be penetrated.

Figure 1. Converging armies of radiation
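A toy numerical version of the converging-forces idea (my own illustration, not from the original study): each beam alone stays below the dose that damages healthy tissue, but the doses add up at the one point where the beams cross.

```python
import numpy as np

grid = np.zeros((11, 11))          # a slice of tissue; the tumor sits at the center
tumor = (5, 5)
tissue_threshold = 1.5             # dose that damages healthy tissue
tumor_threshold = 3.0              # dose needed to destroy the tumor
beam_dose = 1.0                    # each individual beam is harmless on its own

# Four weak beams entering from different directions, all aimed at the tumor.
grid[tumor[0], :] += beam_dose                           # west-east beam
grid[:, tumor[1]] += beam_dose                           # north-south beam
grid[np.arange(11), np.arange(11)] += beam_dose          # one diagonal beam
grid[np.arange(11), 10 - np.arange(11)] += beam_dose     # the other diagonal beam

dose_at_tumor = grid[tumor]
max_dose_elsewhere = np.max(np.delete(grid.ravel(), 5 * 11 + 5))
print(dose_at_tumor >= tumor_threshold)        # True: the tumor gets destroyed
print(max_dose_elsewhere < tissue_threshold)   # True: the healthy tissue is spared
```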

The preceding is basically the experiment that Gick and Holyoak perform. They give people the tumor problem. Only 2 in 30 people were able to solve it on their own. Then they tell them the story of the general. This story’s solution is a tailor-made analogy for the tumor problem (converging weak forces), and 14 of 35 people were able to solve the tumor problem if they got the general story. An additional 12 people came up with the solution after being given a hint to use the story to figure out the tumor problem.

A meta-analysis of 57 similar experiments obtains similar results. The intervention reliably has a small-to-medium sized effect. Giving people a solution wrapped up in a metaphor helps them find the solution. A bit.

2. Metaphors and Forecasting

Do these lab results carry over into real world settings? Forecasting  forms a set of problems that are not contrived but do have definite “correct” answers. Phil Tetlock has been asking people to make political and economic forecasts for decades, and tracking the results.  Drawing on a very old dichotomy, Tetlock classifies his forecasters as either “foxes” or “hedgehogs.” The idea is that “the fox knows many things and the hedgehog knows one big thing.” Tetlock consistently finds that foxes are better forecasters than hedgehogs.

Now, this isn’t really a direct test of the hypothesis. Tetlock isn’t “counting metaphors” for his forecasters. He classifies his thinkers as foxes or hedgehogs based on a series of questions about cognitive thinking style. These include agreement with statements such as “even after making up my mind, I am always eager to consider a different opinion” and “I prefer interacting with people whose opinions are very different from my own.” In general, Tetlock uses language like “hedgehogs view all situations through the same lens” and “foxes aggregate information, sometimes contradictory information, from many different sources.”

However, there is evidence that foxes make use of more and different analogies. Tetlock notes “foxes were more disposed than hedgehogs to invoke multiple analogies for the same case (Tetlock, p. 92).” And it seems to help them navigate novel situations.

3. Metaphors and the Individual

There are also some observational studies consistent with the idea that more metaphors make us smarter.

  • Highly creative people (which we can use as a proxy for innovation) tend to be open to new experiences and curious (Sawyer, p. 64). These are two channels through which people may acquire additional concepts that can be used as metaphors.
  • Multicultural individuals, such as people who have lived abroad, show more creativity. Living in a different culture is, of course, a rich source of new metaphors.
  • Scientific “geniuses” tend to have broad interests (Simonton, p. 112): they have more diverse hobbies (painting, art collecting, drawing, poetry, photography, crafts, music) and are voracious readers, including extensive reading outside their main discipline. Again, this is hardly a direct measure of the size of their metaphor libraries. But broad interests would tend to foster a larger and more diverse set of potentially useful metaphors.

Of course, alternative and additional explanatory factors are also possible. But these threads are at least consistent with the story that having access to more metaphors facilitates innovation.

4. Metaphors over Time

It’s not hard to establish that the set of metaphors has grown over time. In their book on metaphor, Hofstadter and Sander compile an illustrative list of concepts unavailable to most generations of humanity. Each of these is available as a metaphor to people alive today, but not to people living, say 100 years ago. Here are 25 examples from their list of over 100 (p. 129):

  • DNA
  • virus
  • chemical bond
  • catalyst
  • cloning
  • email
  • phishing
  • six degrees of separation
  • uploading and downloading
  • video game
  • data mining
  • instant replay
  • galaxy
  • black hole
  • atom
  • antimatter
  • X-ray
  • heart transplant
  • space station
  • bungee jumping
  • channel-surfing
  • stock market crash
  • placebo
  • wind chill factor
  • greenhouse effect

If these concepts are a better metaphor for some novel situations, then denizens of the modern world have a leg up on their ancestors. They are equipped with a bigger toolbox for handling novelty. This could be one factor explaining the Flynn effect, whereby IQ scores on standardized tests rise with each generation.

Additionally, there are many scientific theories whose discovery depended on metaphors that were not always available. The proliferation of clocks in Europe may have made Europeans amenable to thinking of the universe as a machine following strict natural laws, instead of the whims of spirits (Wootton). Niels Bohr used the heliocentric model of the solar system as a metaphor for the atom – a metaphor that would have been basically unavailable before Copernicus. And Einstein used principles derived from Newton’s classical physics to guide his hunt for the theories of special and general relativity. Again, the evidence is at least consistent with more metaphors being an asset to thinking.

5. Metaphors across Geography

More controversially, some have speculated that similar channels explain the correlation between economic prosperity and test performance (IQ, standardized math, and others). GDP per capita is positively correlated with the average performance of a nation on standardized tests. A lot of people argue that this is because human capital/intelligence/IQ (whatever you want to call it) leads to economic prosperity (more innovation, better policies, more cooperation, more patience, etc.). But the causal arrow could just as well point in the opposite direction. Countries with more economic prosperity tend to have more complex economies (Hidalgo), more literacy, and greater access to digital information. All three of these channels may well expose the typical citizen to a more diverse set of concepts and processes. And these concepts and processes will then be available as metaphors. This bigger library of metaphors could then be a reason people in these countries do better on standardized tests.

Finally, just as there is some evidence multicultural individuals are more creative, countries with populations drawn from lots of different places tend to have higher patenting intensity and more economic prosperity. Again, this is hardly a direct test, but we can imagine people from different countries bring different sets of metaphors with them. A country with people from many different countries might have a more diverse set of metaphors, which could partially account for its higher performance in innovation.

A Virtuous Circle?

Item #1 above provides the most direct evidence that access to metaphors facilitates problem solving. The remaining items all show that more diverse people, information, interests, and concepts tend to lean in the same direction as various metrics for innovation. Correlation isn’t causation, but it’s possible the set of available metaphors is a causal link between the two. If that’s the case, then as society gets more metaphors it gets better at innovating. Maybe we are living in a virtuous cycle where innovation leads to social complexity and social complexity leads to a wider set of metaphors and access to a wider set of metaphors leads to innovation!

Figure 2. A Virtuous Circle?

I think there is some truth in that story, but also that it’s only part of the story. Maybe a small part. For items #2-#5 above, the evidence is pretty indirect. We don’t know if people really do expand their set of metaphors via the hypothesized channels, and we don’t know if they use those metaphors to innovate. Lots of other things are going on, and we don’t know how much those other factors matter. We also can’t be sure this isn’t a spurious correlation wherein “smart” people have diverse interests, but these don’t inform their ability to innovate.

Item #1 provides the most direct evidence that metaphors are causally related to innovation. However, even when we give people a perfect metaphor right before we test them, the effect is not large. And if we wait a day to test them, performance declines. It appears that there is a lot more to innovation than simply having access to the right metaphor.

Staying with the metaphor of a library of metaphors, a major problem might be our ability to search the library. Which metaphor is the right one for a given problem? This is a problem we turn to in the next post…

 

Innovation: Why do Metaphors Work?

The world is full of regularities. One way we encode information about these regularities is with metaphor. When I write about “metaphor,” I don’t mean poetic comparisons. In this post, metaphors or analogies (I use the terms interchangeably) encode information about regularities. They assert something we don’t understand is similar to something we do. Innovators then use these metaphors as maps to guide their travels in the unknown.

Some famous examples:

  • Using Uber as a template for a new kind of business (“uber for X”)
  • Applying the lessons of lean manufacturing to start ups (the “lean start up” model)
  • Viewing the atom as a miniature solar system (the Bohr model)
  • Extending the principle of “no privileged frame of reference” to travel at near the speed of light and to accelerating frames (special and general relativity)

In each case, a more familiar domain (existing business or scientific models) is assumed to apply in a domain that differs in fundamental ways.

This is utterly commonplace. But if you step back a bit and think about it, it’s puzzling. Why does this leap of faith ever work? More specifically, why does it work better than chance? In some cases, we can maybe assert that the same phenomena underlie the two cases. For example, maybe all “Uber-type businesses” rely on the same underlying regularities. In that case, using Uber as a metaphor for another Uber-type business is really just a way of drawing lessons from the broader category of “Uber-type businesses.”

But I think there are many more cases where the examples do not appear to draw on the same underlying phenomena in a meaningful way. The Bohr model is particularly egregious; why should the behavior of planets have anything to do with the behavior of the atom? In fact, they differ in really important ways; yet it was a fruitful metaphor.

Before we answer these questions, let’s take a detour.

Economic Modeling

I’m an economist. Much of my professional life has revolved around little mathematical models of social phenomena. These models are simplistic. They are hard to understand without training. And they don’t give us predictive ability with anything like the accuracy of physics. So why do we bother?

In a wonderful little book on economic methodology, Dani Rodrik provides some reasons. First off, the reason economists use math is not because we are so clever, but rather because we are so dumb. Math forces a model to have internal consistency. You have your assumptions, you have your conclusions, and there are unambiguous rules for deriving the one from the other. Many things that seem obviously true when expressed in language are revealed to be internally inconsistent when expressed in math.

That explains the math, but not the simplicity. Why not more complicated and realistic models, built from math? There are a few reasons.

First, let’s discuss why simple models work at all in a complex world. Let’s assume the outcomes of any social process are derived from the interaction of underlying factors. There might be a huge number of these factors, and they can interact in all sorts of different ways. However, there’s no reason to believe these factors are equally important to the outcome. In all cases, some features will matter more than others. If a small number of interacting factors plays a big role relative to the others, then understanding those goes a long way to understanding the situation. If not, we label the problem “chaotic” or “random” or “complex.”

So there’s a selection issue at play. In most cases, a small number of factors will matter more than the rest. If we model those and say the rest is random, then we will do about as well as we can. In cases where a small number of factors is not sufficient to make meaningful predictions, we simply don’t model it and make decisions at random.

So simple models can be useful. But shouldn’t a more complex model be even more useful? The answer would be yes, if we started with the correct simple model and then added some complexity. The problem is that as models get more complicated, it gets harder and harder for practitioners to identify which features matter most. The second reason economists use simple models (according to Rodrik) is to isolate important causal mechanisms. If you are going to get something right, it better be the most important part. You want to get the skeleton right, so to speak, not the elasticity of the skin.

What do you want to get right? Rodrik uses the term “critical assumptions.” For Rodrik, the “critical assumptions” in an economic model have a specific meaning. It is those assumptions whose modification produces substantive differences in the model’s conclusions. For example, if you want to know what will happen to employment when you raise the minimum wage, you can choose between at least two models. In a perfectly competitive model, an increase in the minimum wage will lower employment. But in a model where firms have market power over hiring (a monopsony model), an increase in the minimum wage may raise employment. Both models are “correct” so long as their critical assumptions are met. In this case, the critical assumptions pertain to the degree of market power for hirers.
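Here is a small numeric sketch of that point (my own toy parameterization, not Rodrik’s): with linear labor demand and supply, the same minimum wage can leave employment unchanged, raise it, or lower it, depending entirely on the critical assumption about employers’ wage-setting power.

```python
# Labor demand (marginal revenue product): MRP(L) = c - d*L
# Labor supply (wage needed to attract L workers): w_s(L) = a + b*L
a, b = 5.0, 1.0
c, d = 20.0, 1.0

def competitive_employment(min_wage):
    # Perfect competition: the market clears where MRP(L) = w_s(L).
    L_star = (c - a) / (d + b)
    w_star = a + b * L_star
    if min_wage <= w_star:
        return L_star                      # non-binding floor: nothing changes
    return max((c - min_wage) / d, 0.0)    # binding floor: read employment off demand

def monopsony_employment(min_wage):
    # Monopsony: the firm sets MRP(L) = marginal cost of labor = a + 2*b*L.
    L_m = (c - a) / (d + 2 * b)
    w_m = a + b * L_m
    if min_wage <= w_m:
        return L_m
    # A binding floor flattens the marginal cost of labor at the floor, so the firm
    # hires until MRP equals the floor or until supply at that wage runs out.
    return max(min((c - min_wage) / d, (min_wage - a) / b), 0.0)

for w_bar in [7.0, 11.0, 14.0]:
    print(f"minimum wage {w_bar:>4}: competitive L = {competitive_employment(w_bar):.1f}, "
          f"monopsony L = {monopsony_employment(w_bar):.1f}")

# With these numbers the no-floor outcomes are L = 7.5 (competitive) and L = 5.0 (monopsony).
# A floor of 11 leaves competitive employment untouched but raises monopsony employment to 6;
# a floor of 14 lowers competitive employment to 6, while monopsony employment stays above 5.
```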

Unlike particle physics, the social world is too complex to capture with “the one true model.” Instead, theoretical economics advances by adding many simple models to the library of economic knowledge. Simple models are preferred because they are frequently good enough and because the art of an applied economist is knowing which model to use. Simplicity makes it easier to choose the right model. A good economist understands the critical assumptions that must apply for a model’s predictions to play out in the real world.

Metaphors as Amateur Models

Back to metaphors. What makes for a useful metaphor? A metaphor needs to match features of the object/event to be explained. However, in any real world situation, there are a huge number of features that you could use to match. There are a correspondingly large number of possible metaphors. Good metaphors match the deep/structural features, rather than the surface ones. For example, suppose we encounter the following animal:

Figure 1. An Unknown Animal. What is the best analogy?

We’ve never encountered this creature before. We’re in the unknown. But we can make some reasonable predictions about its behavior by drawing analogies with things we do know. And we have a lot of features to use for metaphor/analogy. Surface-level analogies such as the following may lead us astray:

  • Size: The animal is as big as a whale. Inference: like a whale, the animal is probably harmless.
  • Covering: The animal is feathered like a bird. Inference: like a bird, the animal does not view us as a food source.
  • Color: The animal is brown like my dog. Inference: like my dog, the animal is a friendly ally.

In contrast, the deep/structural feature that matters most is:

  • Predator: The animal is a large predator. Inference: like a bear/lion/alligator, I am in danger!

What makes a feature “structural?” In their exhaustive book Surfaces and Essences: Analogy as the Fuel and Fire of Thought, Hofstadter and Sander write:

In the case of problems to be solved, structural features are those whose alteration would change the goal of the problem or the pathways to solving the problem. [p. 340]

So the deep features are the ones that, if different, would make a big difference. Steven Pinker offers an alternative perspective on what makes a feature “structural.” In The Stuff of Thought, he argues:

the power of analogy doesn’t come from noticing a mere similarity of parts… It comes from noticing relations among parts, even if the parts themselves are very different. [p.254]

Again, the key is to match the “deep” features, not surface similarities.

This language is remarkably similar to Rodrik’s thoughts on good economic modeling. Just as economists have a large set of models to choose from, we all have a practically boundless set of possible metaphors. And just as the key to a good economic model is getting the critical assumptions right, the key to a good metaphor is getting the structural features (and their interactions) right. Hofstadter and Sander even define structural features in a way quite similar to Rodrik’s critical assumptions. Both are the features that can’t be changed without significantly impacting your inference.

Indeed, modeling and metaphor appear to be part of the same extended family. Metaphors are like amateur models. And the utility of models in economics is that they serve as useful metaphors for complicated real-world social phenomena.

Why do metaphors work at all?

If modeling and metaphor belong to the same family, then this provides some insight into why metaphors can be so useful. Economic modeling is a practice designed to give us (limited) insights in very complex settings. It does this by stripping things down to a small number of important causal mechanisms.

Just as with social phenomena, in any real world situation where we are searching for a metaphor, there will be a huge number of potentially relevant factors. By chance, some of these will matter more than others in determining the outcome we care about. Those are the ones we should get right. We do that by matching the deep features of the object to something we already understand and which shares the same deep features.

Metaphors work because, in situations that we don't shrug off as fundamentally unpredictable, a small number of features interact to drive the outcome. When metaphor works, I suspect it's because the number of situations in the universe where only a small number of features matter is much larger than the number of qualitatively different ways a small set of features can interact. For any given way a small set of features can interact, there are probably many corresponding real-world examples. Each of these is a candidate for a useful metaphor, because each captures the same underlying interaction.

Take the Bohr Model as an example. If we care about the outcome “stability of an atom,” there are many possible features we could investigate: position of the atom in the universe, duration of the atom, number of constituent elements, size of elements, mass of elements, velocity of elements, etc. Some of these matter (number, mass, velocity), and others do not (position in the universe, duration of the atom). The set of features that drives the outcome is small, and so finding another example where similar features drive the outcome may be fruitful. The solar system is one such example, but there could have been others.

Why not models?

Economic modeling and the utility of metaphors both rely on a small enough set of features and interactions for our brains to track. However, unlike modeling, metaphors bring with them the baggage of a large number of other "irrelevant" features. So why do we bother at all with metaphors? Why don't we leap straight to models of feature sets, as we seem to in economics? Why do we add the extra confusion of a second real-world example, and all its baggage of irrelevant features?

Again, I think economic modeling has some lessons. When using a model to make inferences, there are two different mistakes we can make:

  1. Our model can be internally inconsistent
  2. Our model can be internally consistent, but we choose the wrong one.

Metaphors limit our possible mistakes to the second case.

Economists use math to ensure their models are internally consistent. I suspect metaphors perform the same function. A metaphor based on real world phenomena must be internally consistent, if only because it’s happened in the real world. If I’ve encountered A, then all the features of A must be able to fit together without contradicting each other. The existence of A in the real world has proven those features and their hypothesized interactions are internally consistent. Going forward, if I use A as a metaphor for a new situation, it’s possible that I’ve chosen the wrong metaphor, but at least I haven’t chosen something that’s impossible.

Metaphors have a couple of other useful features. Models and scientific theories are built up from the orderly interaction of different assumptions. With the exception of computer simulations and other black box techniques, we usually understand exactly what is happening in the model. This is necessary to maintain internal consistency and helps us identify critical assumptions. But it’s also limiting. The need to keep models tractable imposes severe constraints on the kinds of assumptions we can make, at least in the case of economics.

In contrast, metaphors can be used without true understanding. We can be familiar with something (e.g., the human body, love, traffic jams) and use it as a metaphor without having a deep understanding of how it actually works. This vastly expands our set of tools for inference, but at the cost of making it harder to identify which features are deep/structural.

Second, the need to maintain internal consistency means economic modeling is done in the language of math. More generally, models are built from the interplay of rule-like propositions. Again, this is useful for ensuring internal consistency, but it is simultaneously limiting. There are many other regularities and interactions in nature that are awkward to express in terms of rules. Metaphor lets us encode information about these regularities.

The predatory dinosaur from above is one such case. We can make inferences about how it might act, even though it would be awkward (I don't say impossible) to derive those inferences from a mathematical model.

tl;dr

To sum up: metaphors are useful because the outcome of many situations is mostly determined by the interaction of a small number of factors. There aren't that many ways a small number of factors can interact, so there are frequently real-world examples we can draw on that exhibit similar underlying interactions. Using real-world examples (instead of models) is useful because it (1) ensures our model is internally consistent, (2) lets us use examples even when we don't understand exactly how they work, and (3) frees us from the awkwardness of writing everything up in rules/math/logic.

 

Rewarding replication in science

I recently attended a workshop on open science. Open science is about making scientific processes more transparent, data freely available, and papers viewable by all. Among the many potential benefits of an open science model is increased confidence in scientific findings. This is particularly relevant in the midst of the ongoing replication crisis in many fields. The crux of the replication crisis is that many famous findings turn out to be very difficult to replicate. This raises the worrying implication that the original findings were just noise or were methodologically unsound.

Researchers are adopting at least two complementary strategies to deal with the problem. First, methodologies are being tightened up. Second, there is an active effort to increase the number of replication studies.

Ahead of the conference, I spent some time with David Hull’s Science as a Process. I think Hull’s model of incentives in science can shed some light on how we got here, and also suggests some possible paths out. The rest of this post lays out Hull’s model, speculates how it can explain the emergence of the replication crisis, and offers two suggestions (one modest, one ambitious) for increasing the supply of replication studies.

Hull’s model of scientific incentives

Hull, a philosopher of science, developed his ideas after close observation of a small scientific community (taxonomists and cladists). These scientists did not behave like disinterested rational truthseekers:

As it turns out, the least productive scientists tend to behave the most admirably, while those that make the greatest contributions just as frequently behave the most deplorably. [p.32]

For Hull, humans everywhere desire status. Scientists are no exception. He sees the genius of science as redirecting the selfish desire for status into the creation of new knowledge. It does this by creating a "market" for ideas.

The currency of this market is citation (or more broadly, use and influence). To get “rich” you need to generate knowledge that is used (i.e., cited) by your peers. In doing so, you gain status in the community. But the only way to generate new knowledge is to “buy” support from other knowledge by citing it.

For this to work, there have to be some kind of property rights in the creation of knowledge. This is achieved by the priority system: whoever publishes first owns the knowledge. This doesn’t mean they can prevent others from accessing it. Indeed, the priority system enforces prompt disclosure. But it does mean subsequent citations of the knowledge accrue to its original discoverer.

As Dasgupta and David (1994) point out, the priority system has some attractive features. Knowledge can't be "unveiled" twice. Once you have it, you have it. The priority system nudges scientists to work on unsolved problems, because you get no credit for re-proving known results. There's also a practical reason for the priority system. In any system where credit is assigned to people who are not first, it would be possible to get some credit by reading the first publication, pretending you had the same idea, and writing a paper.

For example, suppose we split credit for discovery among all scientists working on a problem. After all, many scientists are frequently working on the same problem and it seems unfair to assign all the credit to whoever was first. This assigns a lot of weight to mere luck. However, such a system could be gamed by fast-follower scientists. Once an idea has been published, anyone can read it and have the idea. What’s to stop them from claiming they were working on it at the same time? (Remember, Hull does not assume scientists are paragons of virtue)

There’s a second attractive feature of Hull’s model. People are not very good at seeking out information that challenges their beliefs, but they are pretty good at pointing out flaws in others’ beliefs. Again, scientists are no different. Scientists are poorly suited to looking for holes and mistakes in their own work, but the citation market in science separates the task of generating and testing knowledge.

Why would a scientist verify the ideas of someone else? First, scientists can get their knowledge more broadly used by eliminating the competition. This requires tearing down rival theories and findings. Even human foibles like grudges and personal animosity can be leveraged into doing useful work in this model:

“…I think that the function of personal animosity in science is still an open question. It might after all serve a variety of purposes…  Scientists acknowledge that among their motivations are natural curiosity, the love of truth, and the desire to help humanity, but other inducements exist as well, and one of them is to “get that son of a bitch.”  [p. 160]

Second, if you want to generate knowledge that is used, you can’t build on foundations of sand. There is therefore an incentive to check the work you build on.

Bubbles in Science

This system has, on the whole, worked really well. We know more about the world than ever before, and we learned most of it during the few hundred years that we have had science. But it's not perfect, and the incentives for replication are one shortcoming.

Replication work isn’t free. It takes a lot of time and effort to replicate a study, in some cases nearly as much work as the original study (there’s a reason it’s not part of the peer review process). I suspect the cost of research in general has gone up over the century (a topic for another post), and the costs of replication probably went up alongside it.

Meanwhile, what is the benefit of a replication?  Or, in the parlance of Hull, how does replication work enhance your status in the community? The benefits of replication are a bit more complicated than costs, because there are direct and indirect benefits. Let’s consider each in turn.

First, replication studies might be directly used and cited. However, because of the priority system, little credit is assigned to successful replications of earlier work. While some might cite the original finding and note it was successfully replicated, not all will. And very few will cite the replication without the original finding.

A replication that challenges the original finding will probably get more attention than a successful replication, if only because it's a more surprising result. But I worry failed replications won't be cited as much as they should be. To the extent they are evidence against particular theories, they (and the now discredited original finding) are less likely to be cited by work within that theory. They may be cited by proponents of rival theories seeking to discredit the competition. But I suspect this only occurs while a field is contested. Once one theory is ascendant, harping on the null results of the defeated theory seems at best a waste of time and at worst punching down. In contrast, a positive original finding might be cited for decades or more. (As an aside, the use of null results seems likely to be influenced by similar issues.)

Second, replication work indirectly affects citations through its impact on what knowledge gets used. A failed replication of finding x redirects subsequent citations away from x and potentially towards rival ideas. A successful replication of finding x indicates x is sturdy enough to build on and encourages citation of x. In either case, the benefit is only realized by the replicator if they "own" rival ideas to x (in the case of failed replication) or if they own x or related ideas (in the case of successful replication).

Compared to original research, it seems like Hull’s model undervalues replication work. A replication that takes nearly as much work as an original finding, for example, is unlikely to get as many citations as an original finding. For any activity, when costs are high relative to benefits, people will gravitate towards substitutes. What are substitutes for replication work? One such substitute is the reputation of the scientist. Those who have done good work in the past will be trusted without as much verification in the future. Another substitute is to freeride on the judgment of the community. If at least some people are still doing independent verification, then highly cited work is more trustworthy. Both substitutes for replication contribute to a Matthew effect. The rich get richer.

These kinds of dynamics seem prone to bubbles. Suppose a paper by a famous author is mistaken, but this is never discovered because replication would be too much work. Instead, the paper is cited based on the strength of the author's reputation. As it accrues more citations, other scientists interpret this as an endorsement by the community. Its citations are further increased. This in turn raises the profile of the original author. Their subsequent work (likely in the same area) is now more likely to be cited without verification. In this way, an entire research edifice may be built on sand.

When the bubble pops, you get things like the replication crisis.

Existing proposals to increase replication

So what to do? There is currently a big push underway to increase the supply of replications in science. The open science project helps by reducing the cost of replication. It tries to make the research process more transparent, and makes the original finding’s data freely available.

Other proposals I’ve heard try to make replications easier to publish. One proposal is for journals to accept or reject papers based on pre-registered research plans rather than results. Journals favor “surprising” results, which makes them loathe to accept both null results and successful replications. The proposal would force journals to accept or reject based on a methodological plan, before the outcome is known. Another recent journal accepts replications that were submitted to good journals, but rejected because the results were not surprising. To make the process painless, the journal would accept the peer review reports from the rejecting journal. A third proposal would commit journals to paying for replications of a random sample of accepted papers. Other proposals would integrate replications into graduate school training.

A modest proposal

I see no problem with trying these. But Hull’s model suggests the root of the problem is that replication work does not confer status. This is because it is not likely to be cited. My modest proposal is to increase the number of citations to replications by empowering journal editors to add them.

There is already precedent for this in an adjacent knowledge creation field. Patents also have to cite relevant “prior art,” whether in the form of other patents or not. Patent examiners can and do add citations to patents (omitted by the applicant) if they feel the citation is relevant. Today, these examiner-added-citations are indicated on US patents with an asterisk, but they are in every other way treated as a “normal” citation.

The format is not important, but I imagine this could look like figure 1. Replication studies are listed below the original finding, indented and perhaps accompanied by an asterisk (to indicate they were added by the editor).

Figure 1. Possible editor-added replication citation format

Would these citations confer status? I think so. Just as citations to original findings bolster a paper's support, so too do citations to replications of those findings. Moreover, just like every other citation in the paper, they would credit the replication author by name. Furthermore, to the extent that a scientist's career is summarized by various citation metrics (h-index, total sum, Euclidean norm), these citations would "count" just as much as the rest. And finally, to the extent that journals chase citations too, an increase in citations to replications would increase their willingness to accept replications for publication.

I think this would help. Over time, if enough replication happens, a new norm may emerge in science wherein replications are cited without input from journal editors (encouraging such a norm has been suggested by Coffman, Niederle and Wilson). But it’s not a perfect system either. Which brings me to a second proposal.

A not-so-modest proposal

One problem with the above is that it creates scope for replicators to "free-ride" on highly cited papers. Think, for example, of hordes of graduate students running the same code on the same data, both made available by the original authors of the finding. You could end up with dozens of "replications" that add little value. This issue could be mitigated by editor discretion and by the difficulty of judging which papers will be highly cited. But a bigger problem is that the above solution does little to address the problems of replications that challenge the original finding. These failed replications often push scientists into completely different research areas. There are no papers left to add citations to.

A more ambitious proposal is to try to estimate the marginal impact of a replication study on the original finding’s citations. For clarity, suppose a paper’s citations can be expressed by a function c(f,t), where f is the set of paper features that impact citation and t is the probability that the paper is “true.” This function is estimated empirically, with t possibly corresponding to the probability a replication effort matches the original finding. I assume c(f,t) is increasing in t.

The value of a replication study is given by:

replication value = | c(f,t’) – c(f,t) |

where t’ is the updated probability of truth after a replication (using Bayes rule). The replication value is the absolute value of the change in the original finding’s citations induced by the replication.

This formula has a number of nice properties. Successful replications raise t and their value is equal to the increase in citations associated with the rise in t. Conversely, when the replication challenges the original finding, the original finding receives fewer citations and the replication is rewarded for directing research away from the area. In either case, the value is larger for original findings with feature sets f such that they are highly cited if true.

Second, if we use Bayes’ rule to update our estimate of truth, early replications move t a lot, and are rewarded accordingly (a sort of generalized priority system). It may also be possible to design ways of incorporating the quality of the replication (for example, bigger samples might count as better evidence), so that high quality ones have a bigger impact on t’. Along the same lines, if we can measure the correlation between different replication efforts, this could also be incorporated. Replications using the same dataset will likely have outcomes highly correlated with the original finding, and therefore provide less evidence (they move t’ less) than those that gather new data. All of this would serve to nudge scientists towards the most valuable replication work.
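
To make the arithmetic concrete, here is a minimal sketch of how such a score might be computed. Everything in it is an illustrative assumption: the citation function c(f,t), the prior, and the likelihoods that a true (or false) finding replicates are made-up numbers, not estimates.

    # Illustrative sketch of a "replication value" score; all numbers are made up.
    def bayes_update(prior_true, p_rep_if_true=0.8, p_rep_if_false=0.2, replicated=True):
        """Update the probability a finding is true after one replication attempt."""
        if replicated:
            num = p_rep_if_true * prior_true
            den = num + p_rep_if_false * (1 - prior_true)
        else:
            num = (1 - p_rep_if_true) * prior_true
            den = num + (1 - p_rep_if_false) * (1 - prior_true)
        return num / den

    def expected_citations(feature_score, prob_true):
        """Assumed citation function c(f,t): citations rise with the probability of truth."""
        return feature_score * prob_true

    t = 0.5                                      # prior probability the finding is true
    t_new = bayes_update(t, replicated=False)    # a failed replication lowers t to 0.2
    replication_value = abs(expected_citations(100.0, t_new) - expected_citations(100.0, t))
    print(round(t_new, 2), round(replication_value, 1))   # 0.2 30.0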

Would these replication value scores have any impact on scientist status? I admit this is less clear. When c(f,t') – c(f,t) > 0, the replication value could be used in conjunction with the first proposal to decide which replications to cite on any paper. In this way, it would really "get names on papers." A more challenging case is when c(f,t') – c(f,t) < 0. In this case, authors may be hostile to having negative evidence added to their citations. Moreover, there may not be any papers citing the original finding that the replication can be attached to!

At a minimum, however, the replication value could be reported alongside other citation based indicators. Moreover, although I have not emphasized it in this post, citations are not the only way to obtain status in science. Professional recognition can take many forms: promotions, prizes, fellowships, appointment to the leadership of professional societies, etc. Replication value could be used to decide who gets recognized.

However, the main advantage of this approach is that it can be unilaterally implemented. There is nothing to stop a motivated individual with access to the right citation data from estimating their own c(f,t) and posting the replication value of different studies on the web. The main difficulty is probably in coming up with an estimate of c(f,t) that is convincing to the research community.

But that’s why I called this proposal not-so-modest.

Autonomous Innovation with Neural Networks?

Earlier this year, an essay by Gary Marcus threw some cold water on the idea that humanity is on the brink of general artificial intelligence. Specifically, Marcus targeted Deep Learning, the method underlying modern neural networks. Marcus makes a series of critiques, but one prominent one (discussed earlier) is that neural networks have abysmal judgment for problems that lie outside the borders of their training set. This doesn’t mean they can’t be very good inside their training set. But give them a problem not well approximated by a mix of the features of examples they are trained on, and they fall to pieces.

On this (and other) dimensions, neural networks fare worse than human beings. But how important is this? In this post, I want to think a little about how far you can get with purely "interpolative innovation." By interpolative innovation, I mean innovation that consists in the discovery of new things that are "mixes" of preexisting things. A lot of innovation falls under this category.

A nice 2005 paper by economist Ola Olsson serves as a roadmap. Olsson's "Technological Opportunity and Growth," published in the Journal of Economic Growth (paywall, sorry), makes no mention of neural networks. It was meant to be a paper about how innovation in general happens. But it nicely illustrates how the dynamics of primarily interpolative innovation might play out in the long run, and how interpolative innovation can come to look like it evades Marcus' critique: it (seemingly) breaks free of its training set.

The Technology Space

Olsson asks us to imagine a high-dimensional space. Each of the dimensions in this space corresponds to some kind of attribute that an idea might have, and which might be measured by a number. Olsson suggests dimensions could correspond to things like "complexity", "abstractness", "mathematics", "utility", and so on. Scattered throughout this space are specific technologies, represented as points. Essentially, he asks us to imagine the human technological system as a cloud of points, where each point corresponds to a technology and a point's position tells us about its features. Technologies with a lot of similar features are closely bunched together, and technologies with very different features will be distant.

Figure 1. Technology as a set of points… except with a lot more than two dimensions

This can be mapped into a neural network setting pretty easily. For a neural network to work with data, the features of the data need to be fed to specific neurons as numbers. We can imagine those numbers correspond to positions along axes in Olsson's technology space. Just as the set of technologies floats out there as a cloud of points in high-dimensional space, so too does the set of training examples float out there in high-dimensional space. Examples that have very similar features are close together, examples with very different features are far apart.

Innovation in the technological space

In Olsson's model, all innovation is an interpolation between existing technologies. To begin, he defines incremental innovation as the discovery of a new technological point lying on a line connecting two technologies that are "close" together and which are already known. Essentially, we add new points to our cloud, but we can only add points in the spaces between existing technologies. However, as we innovate, we add new technologies, and these give us new possibilities for combination. If incremental innovation were all there was, then in the long run, we would eventually fill up all the gaps between technologies. In technical parlance, we would be left with the convex hull of technologies close enough to eventually be fully connected. The convex hull is the region such that no line drawn between points in it falls outside the region.

Figure 2. The convex hull of Figure 1. Again, pretend there are a lot more than 2 dimensions.
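
To make the geometry concrete, here is a hedged sketch of what "staying inside the convex hull" looks like in code. The two-dimensional points are toy stand-ins for Olsson's technology space, not anything from his paper:

    import numpy as np
    from scipy.spatial import Delaunay

    # Known "technologies" as points in a (toy, two-dimensional) feature space.
    known = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    hull = Delaunay(known)

    def reachable_by_incremental_innovation(candidate):
        """True if the candidate point lies inside the convex hull of known technologies."""
        return hull.find_simplex(np.atleast_2d(candidate))[0] >= 0

    print(reachable_by_incremental_innovation([0.5, 0.5]))  # True: an interpolation
    print(reachable_by_incremental_innovation([2.0, 2.0]))  # False: needs a "radical" jump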

This is precisely Marcus’ critique of neural networks. They cannot extrapolate, and they cannot go beyond their training data. At best, they can recover the convex hull of their training set. Additionally, note that Olsson assumes it’s only possible to interpolate in between technologies that are already “close” in technological space. This is reminiscent of the way that neural networks need to be tuned to the kind of data they receive. For instance, the inputs to image classification neural nets differ dramatically from AlphaGo’s inputs, and the networks cannot transfer what they’ve learned in one domain to another (indeed, this is another of Marcus’ critiques). So we might imagine, in Olsson’s framework, that neural networks are only capable of interpolating between very similar (i.e., “close”) sets of technologies.

Olsson adds to his model the assumption that every once in a while, purely by random chance, serendipitous discoveries are made. These are points in the technological space that simply appear on the scene. By chance, some of them will lie outside the convex hull described above. We can imagine these correspond to incidents like Fleming's lucky observation that a stray bit of Penicillium mould had retarded the growth of bacteria in a petri dish. Or maybe they correspond to scientific anomalies in Thomas Kuhn's sense.

So long as incremental innovation is feasible, researchers ignore these discoveries. However, at some point, incremental innovation begins to exhaust its possibilities. All possible combinations have been discovered. When this is the case, researchers turn their attention to these discoveries and engage in what Olsson calls radical innovation. He assumes this is riskier or more costly, and therefore the less favored choice of researchers. However, when incremental innovation is not possible, they have no choice but to turn to radical innovation.

Radical innovation is the creation of a new technology lying on a line between an existing technology and one of the serendipitous discoveries lying outside the convex hull. After much radical innovation, there are enough new technologies close enough to existing ones for incremental innovation to begin again. This time, the incremental innovation makes use of the technologies discovered by radical innovation. In this way, innovation proceeds, breaking free of its old paradigm by exploiting and developing serendipitous discoveries.

Again, this framework seems a good fit for neural networks. We can imagine radical innovation corresponds to retraining a neural network with additional examples that lie outside its previous training set. Just as Olsson assumes radical innovation to be in some sense harder than incremental innovation, retraining neural networks in this way seems harder than business as usual. In our last post, for example, it took several years before computer scientists figured out how to represent the styles of multiple painters in the same neural network. If we want to add new and distinctive examples to our training data, we may need to modify the network so that it can handle both the existing data and the new one. This kind of “radical” training set expansion is painful and time-consuming, but eventually enables neural networks to go to work interpolating new technologies between existing data and the new discoveries.

An Autopilot for Technological Progress

In Olsson's model, incremental innovation proceeds as long as the paradigm remains productive (meaning not all ideas in the convex hull have been found). At the start of a paradigm, there are abundant opportunities for combining ideas. The returns to R&D are high. Over time, these ideas get fished out and the return on R&D falls. Throughout this period, random and serendipitous discoveries or anomalies are occasionally made, but they are left unexploited. As time goes on though, incremental innovation runs out (and this happens more quickly if there are more researchers). At that point, a period of difficult and groping R&D happens as firms engage in radical R&D. This requires interpolating between the known paradigm and the serendipitous discoveries. After some successes though, the convex hull is expanded and a period of incremental R&D starts anew.

Olsson meant to describe the cyclical nature of innovation by human civilization. But his model also provides an (admittedly speculative) blueprint for open-ended automated innovation by next generation neural networks. For as long as it’s valuable, the neural networks generate new discoveries by “filling in the gaps” of their training data with interpolations. Think of AlphaGo discovering better moves in Go, and style-transfer networks discovering new styles of painting, but also next generation neural networks discovering new molecules for drugs or material science, and new three-dimensional structures for buildings and vehicles (no links because I made those up).

And we could also automate non-incremental innovation. We could begin by programming the neural network to look for its own serendipitous discoveries. In Olsson's model, these come about by random luck. But in a neural network, we could program them to occasionally try something completely random (and outside the boundaries of their training set). This will almost never work. But on the rare occasions when the neural net tries something outside its training set that actually works, it can incorporate this information into its next training set. It can interpolate between this discovery and its existing ideas, and in this way it can escape the cage of its training set.
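
A toy sketch of that loop is below. The exploration rate, the candidate-generation rules, and the stand-in for "actually works" are all assumptions for illustration, not a real training procedure:

    import numpy as np

    rng = np.random.default_rng(0)
    known = [np.array([0.2, 0.4]), np.array([0.8, 0.6])]   # toy "training examples"

    def actually_works(candidate):
        # Stand-in for whatever external feedback tells us a discovery is useful.
        return rng.random() < 0.01

    for step in range(10_000):
        if rng.random() < 0.99:                            # usual case: interpolate
            i, j = rng.choice(len(known), size=2, replace=False)
            w = rng.random()
            candidate = w * known[i] + (1 - w) * known[j]
        else:                                              # rare case: a random leap
            candidate = rng.uniform(-5, 5, size=2)
        if actually_works(candidate):
            known.append(candidate)                        # fold the discovery back in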

The barriers

For this kind of autonomous technological progress to work, (at least!) two problems would need to be solved. The first we have already alluded to. Neural networks are quite domain specific. There is no guarantee they can even "see" examples that lie outside their training data, especially if different features of that data are what's relevant. Maybe we could build neural networks that are trained on the specific task of putting new data into a form usable for old neural networks… but we are well outside my area of expertise here. I have no idea. In any event, maybe humans can do that task (not that it would be easy!).

The other barrier is the nature of the feedback a neural network would receive. Neural networks tune their internal structure according to well-defined goals, whether that goal is winning at Go or matching the style of a painter while preserving the content of an image. A neural network trained to deliver useful technologies would need to be trained on how valuable its discoveries are. How would that be determined? The answer is not so clear. In some cases, it's relatively easy. If the neural network is generating new drugs, we can run clinical trials and see how they fare. But what if we're developing polymers for material science, or new three-dimensional structures? We can rate these discoveries on various criteria, but they may have unexpected and unanticipated uses. An alternative would be to let the market decide: after all, technologies that are profitable are ones that consumers will pay a lot for, relative to production costs, and this seems to be closely related to the value of an idea. But this solution is not without its own problems. For example, it might lead the neural networks to develop baubles for the super-rich.

I don’t intend to resolve this issue here. Indeed, how best to incentivize human innovators to focus their efforts where it is most valuable is an open question (and one I’ll explore in later posts)! But this need not distract us too much; my main point is to illustrate that it is at least possible for innovation to go a very long way, even if it’s primarily interpolative.

However, just because something is possible, doesn’t mean it’s a good idea. Might there be better ways to innovate? At the end of the day, neural networks are only one way to represent regularities in nature. In upcoming posts, we’ll discuss some of the others.

 

Neural Networks and Old Regularities

In my last post, I wrote about the possibility that neural networks can represent new regularities in nature. These regularities are impossible to concisely represent with the kinds of representations humans are comfortable with: chiefly rules, probabilistic statements, and metaphor. This can make their pronouncements seem eerie and magical. But neural networks (hereafter NNs) are nothing if not flexible, and can also represent old and familiar regularities. These are the type we can translate into easier-to-digest formats. And this is another avenue in which they can innovate.

To explore this, let’s talk about NNs copying famous painters.

For a variety of reasons, a lot of computer scientists are interested in teaching NNs to paint. Well, more precisely, how to generate images that apply a given artistic style to any photo. One of the early papers in this area is Gatys, Ecker, and Bethge (2015) (real “early” right?). In figure 1, they apply the style of Van Gogh’s Starry Night to a photograph of canal houses.

Figure 1. Applying Van Gogh to an image via neural networks. Source: Gatys, Ecker, and Bethge (2015)

In contrast to the baffling genius of AlphaGo, this is a case where we can understand what the NN is doing. Copying the style of Van Gogh is not that mysterious. People do it all the time. Here’s a lovely painting of trees by Sylvie Burr.

Figure 2. Trees in the style of Van Gogh (creative commons, by Sylvie Burr)

This is not a case where regularities are mysterious and defy explanation. The regularities that characterize Van Gogh’s style include a texturing of thick swirling lines and a moody (rather than realistic) color palette. Show us examples of his style and even someone who has never seen his work will begin to pick out commonalities.

Neural Network Representations in images

NNs are also capable of representing those regularities. But, in an emerging theme, the way they represent those regularities is opaque. We can't "tell" a NN what regularities characterize Van Gogh's style. Instead, we give it lots of examples and let it rediscover these regularities on its own.

However, before we go on, we have to talk about a second set of regularities that a NN has to represent to transfer style. These are regularities in the content and perspective of images done in different styles. In Figure 1, we can tell that both images correspond to the same subject matter and view. They only differ in their styles. In contrast, figure 2 and the right-hand side of figure 1 have the same style (sort of), but clearly depict different subject matter. The NN has to represent both regularities in style and content.

It does this in different ways (I am drawing on this, and this for this section). For the computer scientists, "style" is understood as a form of diffuse texture. It is the curving lines and color palette of Van Gogh, not the composition and choice of subject (in this respect, they miss a lot). When they train a NN to match the style of a painter, the "style" of the image is converted into numbers corresponding to non-localized regularities over the whole surface of the image. For example, in figure 1, it doesn't care much about making the left-hand side of the image dark (to match the mountain spire of Starry Night). Instead, it cares about matching the thick wavy lines and color palette of the entire image. By assessing how much the numerical summary of the NN's style differs from that of the example, the match can be evaluated. The NN's weights, links, and thresholds (link to post 1) are tweaked in pursuit of this style target.
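
As I understand it, the original paper captures this non-localized notion of style with Gram matrices: correlations between feature channels, averaged over the whole image, so that spatial position drops out. Here is a minimal sketch of the two kinds of loss (the choice of layers, their weighting, and the optimization loop are omitted):

    import torch
    import torch.nn.functional as F

    def gram_matrix(features):
        """Correlations between feature channels, averaged over the image; position drops out."""
        b, c, h, w = features.shape
        flat = features.view(b, c, h * w)
        return flat @ flat.transpose(1, 2) / (c * h * w)

    def style_loss(generated_feats, style_feats):
        """How far the generated image's texture statistics are from the style image's."""
        return F.mse_loss(gram_matrix(generated_feats), gram_matrix(style_feats))

    def content_loss(generated_feats, content_feats):
        """How far the generated image strays from the photo's layout; here position matters."""
        return F.mse_loss(generated_feats, content_feats)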

So much for style. To match the subject matter of an image, the evaluation is done with respect to large-scale local regularities. In figure 1, it wants to see a recognizable sky on top, row houses in the middle, and water on the bottom.

Recognizing that two images are of the same subject and perspective, even if all their pixels are different, is closely related to the problem of image classification. Image classification problems include facial recognition (realizing two different images correspond to the same face) and labeling image content (as corresponding to, say, dogs, or cats, or "inappropriate content"). In all cases, we want to match regularities in the relative position of large chunks of the visual image. For example, if we're identifying faces, we might be comparing the relative size of the "nose" to the "mouth" and "eyes" (although actually we don't know what the NNs are doing).

Identifying regularities in images with different styles is closely related to the image classification problem. So computer scientists actually borrow the NN representation of these image classification problems! In a literal sense, they start with the hidden layers of NNs trained on image recognition (including the nodes, links, weights, and thresholds of the NN) and simply copy them over to the style-transfer NN. And recall, we don’t really “understand” what the NN is doing to classify images. Here, the fact that we don’t “understand” means we struggle to translate what the NN is doing into metaphors and rules. But it’s not necessary for us to do that. The representation encoded by the NN still does what we want a representation to do: it conveys information about a regularity in nature. We don’t really “understand” the NN’s internal representation of the regularities in an image, but that doesn’t stop us from redeploying that representation in a new context. Does it reliably identify image content? Great, that’s all we care about!
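
In code, this borrowing can be as literal as loading a network pretrained for image classification and reading activations out of its hidden layers. A hedged sketch using torchvision (the specific model and the layer indices are arbitrary choices for illustration):

    import torch
    from torchvision import models

    # Reuse a network trained for image classification; keep only its hidden layers.
    vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)   # we copy the representation; we don't retrain it

    def hidden_activations(image, layers=(1, 6, 11, 20)):
        """Collect responses from selected hidden layers; these serve as the image's representation."""
        feats, x = [], image
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layers:
                feats.append(x)
        return feats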

Is this really innovation?

So, NNs are capable of representing regularities in image style and content in such a way that styles can be swapped and content retained. By my definition, this is an innovation: the NN has stepped into the unknown and exploited regularities to generate something a lot more “interesting” than a random collection of pixels. But it’s fair to say these innovations are not world-changing. Indeed, they can fairly be described as derivative. An artist who only copied other artists’ styles wouldn’t be described as innovative, even if he did apply those styles to new contexts.

This is related to a critique of NNs by NYU psychologist Gary Marcus:

In general, the neural nets I tested could learn their training examples, and interpolate to a set of test examples that were in a cloud of points around those examples in n-dimensional space (which I dubbed the training space), but they could not extrapolate beyond that training space. (p. 16)

Put another way, NNs are good at combining aspects of what they are trained on (the content of pictures, the style of painters), but they are always trapped in the cage of these examples. A NN trained in the above manner won’t ever take us beyond Van Gogh.

But this is not as much of a shortcoming as it seems. The ability to usefully combine aspects of disconnected things is, in fact, one of the fundamental creative acts. Indeed, Keith Sawyer (who wrote the book on creativity) defines innovation as “a new mental combination that is expressed in the world.” (pg. 7) I’ll briefly give examples in three different domains.

  • In art, borrowing and recombining ideas can be seen in super-literal forms. Think of works like Pride and Prejudice and Zombies, and mashup artists like GirlTalk. But it's also there, just below the surface, in things like Star Wars.
  • Earlier, I asserted all technologies are combinations of pre-existing components. The internal combustion engine is one clear example. The modern combustion engine is built from a dizzying set of components that were often pioneered elsewhere. To take two examples, crankshafts and flywheels together convert uneven back-and-forth motion of a piston into smoothly rotating energy. Crankshafts had previously been employed to transform the rotational motion in waterwheels and windmills into back-and-forth motion. And flywheels had long been used to give potter’s wheels smooth and continuous motion. (Dartnell, pg. 201-207)
  • Lastly, the product of sexual reproduction is of course a new organism that draws on a mix of genes from each of its parents. Over time, this mixing, matching, and selection generates entirely new species.

The difference between the above and what NNs are doing is a difference of degree, rather than a difference of kind. Most of the innovation done by humans and nature is also bound by "the training space" of available examples.

Expanding the Training Set

The difference is that humans and nature have a vastly, vastly more diverse storehouse of training examples than NNs. It’s not possible for a NN trained to reproduce the style of Van Gogh to go beyond him, because the only examples of painting it has are those of Van Gogh. To develop a new style, it would need examples drawn from other styles of painting at a minimum. More importantly, to really generate something we’ve never seen before, the NN would need the capacity to interpolate between different styles. Is this possible?

Yes, and it’s been done. Dumoulin, Shlens, and Kudlur (2017) from Google Brain trained a single NN to transfer the styles of 32 different artists to new images. Because the same NN represented all these different styles, the NN is also capable of applying interpolations of their styles to images. Figure 3 is an example from their paper:

Figure 3. Combining the styles of different painters (figure 7 from Dumoulin, Shlens, and Kudlur 2017)

In this figure, the style of Starry Night has been applied to a picture of Brad Pitt’s face in the upper left corner. Head of a Clown by Georges Rouault is the upper right style, The Scream by Edvard Munch is the lower left style, and Bicentennial Print by Roy Lichtenstein is the lower right style. In between we have interpolations between the different styles. Subsequent work by Ghiasi et al (2017) (a group that includes the same team as above) generalized these techniques to a much wider set of painting styles.
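
My rough understanding of the mechanics: each painter's style lives in a small set of per-style parameters inside the shared network, and a "new" style is just a weighted average of those parameters. A sketch with made-up parameter names and shapes:

    import torch

    def blend_styles(style_params, weights):
        """Mix painters by taking a weighted average of their per-style parameters."""
        blended = {}
        for name in style_params[0]:
            blended[name] = sum(w * p[name] for w, p in zip(weights, style_params))
        return blended

    # E.g., 70% Rouault, 30% Lichtenstein (in the real network these tensors are learned).
    rouault = {"scale": torch.ones(64), "shift": torch.zeros(64)}
    lichtenstein = {"scale": 2 * torch.ones(64), "shift": 0.5 * torch.ones(64)}
    mixed_style = blend_styles([rouault, lichtenstein], [0.7, 0.3])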

This work shows it's possible for NNs to develop styles that did not previously exist. Are they any good? In the small set of examples given in Figure 3, I tend to like the interpolations between Rouault and Lichtenstein more than I like pure Rouault Pitt and pure Lichtenstein Pitt. But the main point of this post is simply to show NNs can innovate, even when they are using the kinds of regularities that humans are able to understand.

Now, what I am not claiming is that NNs can match humans in our ability to combine and interpolate between different ideas. Reading these papers, it’s clear that representing the different styles of painting in a NN was a major technical challenge that took considerable work to implement. Worse, their solution to this problem cannot be applied to problems and data different from the painting-style problem, at least without considerable modification (and maybe not even then). It is going to be a long time before a single NN can combine ideas and concepts from vastly different domains like us. But on the other hand, a lot of progress has been made in just 3 years.

 

Neural Networks and New Regularities

In my first post, I argued reality is full of regularities. Exploiting these regularities requires information about them that can be conveyed and communicated. I call this information representations, of which there are five categories: rules, probabilistic statements, metaphors, neural networks, and instantiations. Today I want to talk about neural networks.

Neural Networks: A Very Basic Primer

If you are already familiar with neural networks, feel free to skip this section. This is a really simplified explanation, and I'm omitting a lot of detail. If you want to learn more, the chapter on neural networks in The Master Algorithm by Pedro Domingos is a recent overview. The Medium series "Machine Learning is Fun!" is an excellent primer if you want to get your hands a bit dirty.

Neural networks were originally inspired by the structure of our brains. Somehow, these arrangements of biological matter are capable of thinking, computing, and learning. What is it about our brains that makes that possible?

Our brains are a network of neurons. Neurons are complicated creatures, but three of their most important features are:

  • They are connected to each other.
  • They can send signals of varying intensity (including negative or inhibitory signals) to each other.
  • They can turn more or less "on" as a function of the incoming signals.

Neural networks jump off from there, replacing actual cells with simplified virtual counterparts. In a modern neural network, the “neurons” can belong to one of three layers: input, output, and hidden.

Input layers code for “features” of something. For example, if the neural network is designed to classify black and white images, each pixel in the image could be a feature associated with an input neuron. The brighter the pixel, the more “on” the input neuron. If the neural network recognizes speech, the intensity of sound waves at a certain frequency might be a feature. More intensity at that frequency might correspond to being “on” for one of the neurons. If the neural network is designed to play Go, the presence of a white stone at each point on the game board might be a feature. The neuron is “on” if a white stone is present at the input neuron’s associated point on the game board.

Output layers are how the neural network communicates with its users. If the network is an image classifier, there could be an output neuron for every class of photo. How “on” the neuron is could convey the confidence that the image belongs to that class. If the neural network is a speech recognizer, the output neurons could be text (words or letters). If the neural network is designed to play Go, the output neurons communicate the chosen move (there may be a neuron for every space, and the one the network would like to place a stone turns on).

Hidden layers lie between the input and output layers, and do the "thinking" of the network. In modern neural networks there can be many hidden layers of neurons. Together, they form a complex web of connections, each with potentially positive or negative weights, and with each neuron having a potentially different threshold for activation. The operation of a neural network is about the propagation of signals forward through the network. The input neurons corresponding to the data's features turn on and send signals to connected hidden neurons. Each of those neurons adds up the strength of the incoming signals (which can each be positive or negative), and depending on how much the sum exceeds a threshold, turns more or less on. The hidden layer neurons then send signals to the neurons connected to them. This process propagates forward until some of the output neurons are activated, communicating the "thoughts" of the neural network.

That’s it. Nothing magical is happening. The propagation of signals at each step happens according to relatively simple rules. The big picture is complicated, but only because there are so many of these small steps, and because they build on each other.
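
To make the forward pass concrete, here is a toy version in which each neuron sums its weighted incoming signals, compares the sum to its threshold, and turns more or less on. All the numbers are arbitrary:

    import numpy as np

    def layer(inputs, weights, thresholds):
        """Each neuron sums weighted incoming signals; the more the sum exceeds its threshold, the more 'on' it is."""
        signal = weights @ inputs - thresholds
        return 1.0 / (1.0 + np.exp(-signal))   # a smooth version of "more or less on"

    features = np.array([0.9, 0.1, 0.4])                          # input neurons (e.g., pixel brightness)
    hidden = layer(features, np.random.randn(5, 3), np.zeros(5))  # five hidden neurons
    outputs = layer(hidden, np.random.randn(2, 5), np.zeros(2))   # two output neurons
    print(outputs)   # how "on" each output neuron ends up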

A useful thing about neural networks is that you don't actually have to set the connections, signal strengths, and thresholds by hand. If you have enough data, there are algorithms that can do all that automatically. Give the neural network an example set of features and see what its output neurons say. If they are wrong, adjustments to the signal strengths and thresholds are propagated back from the outputs, so that the network is more likely to make the correct identification the next time it sees similar features.

Do our brains really work this way? I certainly don’t sense neurons signalling to each other and adding up thresholds. To the extent our brains work this way, the hidden layers are unconscious and we are only conscious of the output. We see a face, and we experience the thought “oh, that’s Grandma” leaping into our head unbidden. But underlying this recognition is (possibly) a biological neural network trained from birth to identify and categorize faces. The light coming into our eyes gets shunted off to various input neurons, which then propagate that information through hidden layers until the correct output neuron (or set of neurons) lights up. At that point, I have the conscious thought “Grandma!”

Neural Networks and Representations

At a more abstract level, neural networks represent local regularities in the data they are trained on with the structure of the network (including its signal strengths and each neuron's threshold). This turns out to be a very flexible way of representing regularities. Indeed, neural networks can represent regularities that are difficult to concisely express in alternative schemes such as rules, probabilistic statements, and metaphor.

There are two sides to this coin. On the one hand, neural networks are capable of representing regularities that simply can’t be concisely represented any other way. On the other hand, the very nature of these regularities is such that they defy translation into alternative forms of representation. There is no metaphor, rule, or probability that “explains” the decision-making of the neural network (except a rule that tediously describes the neural network’s complex inner-structure).

Take this excerpt from Part IV of “Machine Learning is Fun!”, which describes how to train a neural network to identify faces:

…the neural network learns to reliably generate 128 measurements for each [face]. Any ten different pictures of the same person should give roughly the same measurements.
…So what parts of the face are these 128 numbers measuring exactly? It  turns out that we have no idea. It doesn’t really matter to us. All that we care is that the network generates nearly the same numbers when looking at two different pictures of the same person.

The tradeoff is that neural networks allow us to learn regularities that can’t easily be represented in our favored modes, but at the cost of unintelligibility.
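
Unintelligible or not, the representation is perfectly usable: two photos count as the same person simply if their 128 measurements sit close together. A toy sketch (the distance threshold is an arbitrary stand-in):

    import numpy as np

    def same_person(measurements_a, measurements_b, threshold=0.6):
        """Call it a match if the two sets of 128 measurements are close in Euclidean distance."""
        return np.linalg.norm(measurements_a - measurements_b) < threshold

    a = np.random.rand(128)                # stand-ins for the network's 128 measurements
    b = a + 0.01 * np.random.randn(128)    # "another photo of the same person"
    print(same_person(a, b))               # True: nearly identical measurements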

Neural networks are also capable of innovating, in the sense of exploiting these regularities to step into the unknown and make a choice. But since the regularities they exploit are foreign to us and our very way of thinking, the innovations they come up with may seem mysterious and almost magical.

This is demonstrated really well in the Netflix documentary AlphaGo.

AlphaGo and Unknown Regularities

AlphaGo's 2016 match against Lee Sedol, as captured in the documentary AlphaGo, is a great example of how neural networks can exploit local regularities we don't understand. AlphaGo is a program designed to play the board game "Go." The game is relatively straightforward: two players take turns laying stones on a 19×19 grid. The goal is to enclose more territory than the opponent. However, because the board is so large, the set of possible games is enormous. There are 361 possible positions for the first stone, 360 possible positions for the second, 359 for the third, and so on. There are over 5.9 trillion possible ways to play the first five stones. This enormous space of possible games has long meant that brute force calculation doesn't work very well, even for computers. Go has been played for so long, and by so many, though, that a large number of local regularities related to the game have been identified.

AlphaGo does not rely completely on neural networks, but they are a prominent component of its programming. In 2016 AlphaGo faced off against Lee Sedol, a legendary Go player considered to be the greatest of the last decade. Throughout the five game match, AlphaGo made surprising moves that baffled commentators, but later paid off. Move 37 in game 2 is a wonderful illustration:

Commentator #1: Oh, wow.
Commentator #2: Oh, it’s a totally unthinkable move.
Commentator #1: Yes.
Commentator #3: The value… that’s a very… that’s a very surprising move.
Commentator #4: (chuckling) I thought it was a mistake.
Fan: When I see this move, for me, it’s just a big shock. What? Normally, humans, we never play this one because it’s bad. It’s just bad. We don’t know why, it’s bad.

Fan (a European Go champion) makes my point very well. Humans have played Go a long time, and they have internalized certain regularities (indeed, in this case, the regularity that this is just a bad move is known without being understood why!). AlphaGo is playing a move based on regularities unappreciated by human players. In fact, one of the creators later peers inside AlphaGo’s program and learns AlphaGo assigns a probability that a human would play this move at 1 in 10,000.

As the game unfolds, the brilliance of the move becomes clear. Lee Sedol later discusses this move:

Lee Sedol: I thought AlphaGo was based on probability calculation and it was merely a machine. But when I saw this move, I changed my mind. Surely, AlphaGo is creative. This move was really creative and beautiful… This move made me think about Go in a new light. What does creativity mean in Go? It was a really meaningful move.

AlphaGo goes on to win the game, with move 37 eventually seen as a turning point.

I don't know the nature of the regularity that AlphaGo exploited. It might have been the kind of thing easily explained but something human players had simply missed for millennia. That strikes me as unlikely. It might have been something easy to explain (if AlphaGo had been trained to do so), but only exploitable if you have the capacities of AlphaGo (e.g., the ability to track dozens of positions in parallel). I prefer to believe it was something new: a regularity impossible to express in our favored forms.

Uncanny Genius

While neural networks are inspired by the brain, it’s not clear the extent to which our brains actually work that way. I don’t have the expertise to weigh in on this debate. But, to conclude, let’s assume the picture painted above is broadly applicable to the human brain. Doing so can provide a plausible explanation for the mysterious judgments of geniuses, when they simply intuit an answer with uncanny precision, unable to provide an explanation for their insight.

Earlier, I gave the example of how brain structures, organized like neural networks, could identify your grandma from a sea of faces. The interesting thing here is that we experience this as automatic. We just “know” that’s grandma, without access to the underlying categorization process in our own heads.

Our ability to just “know” Grandma’s face doesn’t strike us as particularly mysterious. But when well trained neural networks in our brains do less common things, it can seem mysterious and magical. An expert in a particular domain – mathematics, engineering, science – sees countless examples in their domain over a career. Each example trains their internal representation of the domain. Then, one day, facing a novel situation they just know what to do. And their inability to explain themselves leaves observers dumbstruck and in awe.

I’ll close with an example of this from Gary Klein’s study of firefighters (recounted in Superforecasting by Philip Tetlock and Dan Gardner). An experienced firefighter commander is combating a routine kitchen fire that is behaving a bit strangely. The commander is seized with an uneasy feeling. He orders everyone out of the house. Moments later, the floor collapses. It turns out the true source of the fire had been the basement.

How did he know trouble was afoot? We can imagine the neurons and connections in the firefighter’s head were tuned by countless experiences with fire, until their structure encoded regularities in fires impossible to clearly express in rules, metaphors, or probabilities. Just like AlphaGo, the commander was unable to explain how he knew to get out. He described it as ESP.

 

What is innovation? Fundamentals

 

“The thing that hath been, it is that which shall be; and that which is done is that which shall be done: and there is no new thing under the sun.” – Ecclesiastes 1:9 (King James Bible)

Ecclesiastes is wrong. New things happen all the time these days. Airplanes, iPhones, Wikipedia, nuclear weapons, chemotherapy, and skyscrapers were new things. Evolution, quantum mechanics, general relativity, Marxism, and computer science were new things. Facebook, Amazon, SpaceX, Microsoft, and Walmart were new things. Star Wars, Harry Potter, Guernica, and YMCA (the song) were new things. Shakespeare would have been new to the cynical narrator of Ecclesiastes. And new things had been emerging for millions of years when Ecclesiastes was written. True, at the time it would have been hard to see novelty. But mammals, fish, insects, and multi-celled life were all new long before Ecclesiastes was new.

Did these things bear no resemblance to that which came before? Of course not. Do they resemble various antecedents? Certainly. Much of this series will explicitly explore these resemblances. But to say these are therefore not “really” new is to miss something vital.

This is a blog about the new things under the sun. It is a blog about innovation and the emergence of reproducible novelty.  This is the first in a planned series of posts answering the question “how does innovation happen?” This introduction sets the series up and lays out the way I think about innovation. With that foundation established, I hope the remaining posts in the series will stand on their own.

Defining innovation

By innovation, I mean something quite broad. For sure I mean to include new physical technologies, such as the iPhone. But I want innovation to also encompass a much wider class of human creation. By innovation, I also mean to include academic theories. And art. And I want to include new ways of organizing people – e.g., into armies, churches, and companies. And when I say innovation, I don’t even want to limit it to humans. Nature innovates in the proliferation of countless new species and life forms. And one day humans might outsource their innovation to artificial intelligence.

For the purposes of this blog, I adopt a big tent definition: innovation is the emergence of things that are novel, interesting, and reproducible. “Novel” and “interesting” are self-explanatory, albeit a bit in the eye of the beholder. In contrast, it may not be immediately obvious why reproducibility ought to matter in a definition of innovation. Reproducibility implies two main things.

First, it limits us to things that can, in principle, be classes rather than specific instances. If the thing is a complete one-off then it’s not an innovation, even if it’s new and interesting. For example:

  • The iPhone was an innovation because Apple can (and does) churn them out by the millions. Steve Jobs’ personal iPhone might be interesting (for example, a museum might want to buy it). It was new at one time. But we can’t make more of it, so it’s not an innovation. It’s an artifact.
  • The “ride-share platform firm” is an innovation with several examples (Uber and Lyft being most prominent). Uber itself isn’t an innovation because it’s a singular firm. An exact copy isn’t possible and wouldn’t be Uber… it would be a ride-sharing firm. But if Uber develops a new strategy that others can copy, at least in principle, then that new strategy would be an innovation.
  • The human species was an innovation when it emerged. Jeff Bezos is not an innovation because we can’t make more of him, even if he was new at one time and is an interesting individual.

So when I speak of innovation, I am talking about the creation of a new blueprint for a category of things. I’m not talking about a new and interesting singular instance. However, if I were to leave it at that, I would be leaving in a lot of things we usually don’t think of as innovation. Specifically, I would be leaving in anything new and interesting that happens repeatedly. Things like car crashes, volcanic eruptions, and sunsets.

This is where the second implication of reproducibility comes in. When I say reproducible, I mean the replication requires access to information embodied in the thing. It requires access to a blueprint, or, failing that, access to the thing itself. In this specific (idiosyncratic?) sense, car crashes, volcanic eruptions, and sunsets are not reproducible. Instead, they are just things that happen whenever physical conditions are right. This might be often, so that they happen repeatedly. But in each instance, there is no reference to earlier instances or blueprints. Reproducing a new thing means using information to recreate it without rediscovering it.

In this series, the primary objects of study will be organisms, technologies, organizations, and ideas. Sometimes the boundaries between these objects are fuzzy. Don’t get hung up on that. The distinctions aren’t really important, so long as they are new things under the sun.

What’s the challenge?

So innovation requires creating something new, interesting, and reproducible. Of these, it is achieving the first two together, something both new and interesting, that is the really hard part.

It’s not very hard to come up with something new. Kick a bucket of blocks, scatter paint on a wall, or string letters together in random gibberish. With enough stuff, almost any bit of disorder is an unprecedented configuration of atoms. Neither is it very hard to make things reproducible. If you make the incentives strong enough, a dedicated person can carefully place the blocks, paint the scattered pattern with a brush, or type out letters. Indeed, with enough care and description, lots of disorder can be reproduced. But if it’s not interesting, why would anyone want to?

“Interesting” things are rare in the universe. A blind leap into the unknown will frequently be new, and if you take good notes, reproducible. But a blind leap into the unknown will almost certainly fail to discover something interesting.

This intuition underlies a common (but misguided) argument against evolution by natural selection and for intelligent design (https://en.wikipedia.org/wiki/Junkyard_tornado): the chance that a randomly swirling cloud of atoms will settle into the shape of DNA (much less a living cell) is lower than the probability of a tornado whirling through a junkyard and assembling a Boeing 747 by chance. As a basic premise, I think that’s probably right. And it seems to be a fundamental operating principle of the universe. A bunch of monkeys banging on keyboards won’t produce Shakespeare by chance. Well, unless you have a long time to wait and a lot of monkeys. And only children think you can invent something useful by randomly connecting disparate bits of technology.
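
To get a feel for just how rare “interesting” is under blind search, here is a toy back-of-the-envelope calculation in Python. It is my own illustration: the 27-character alphabet and the target phrase are arbitrary choices, not drawn from any of the sources above.

```python
# Toy illustration: the probability that blind, random typing reproduces even a
# short "interesting" string, and how many attempts you would expect to need.
alphabet_size = 27                 # 26 letters plus a space; ignore case and punctuation
target = "to be or not to be"      # 18 characters

# Probability that a single random string of the same length matches exactly
p_single = (1 / alphabet_size) ** len(target)

# Expected number of attempts before the first success (geometric distribution)
expected_attempts = 1 / p_single

print(f"probability per attempt: {p_single:.2e}")       # ~1.7e-26
print(f"expected attempts:       {expected_attempts:.2e}")  # ~5.8e+25
```

Blind leaps are new, and with good notes they are reproducible, but the odds of them being interesting are vanishingly small.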

Thus the challenge of innovation is finding a way to take a leap into the unknown and to find something interesting when you land. The key is, innovation is not a random leap into the unknown. Innovation is a considered leap into the unknown.

What guides our leaps is knowledge of regularities in the world.

A regularity is a pattern in reality. The laws of physics are regularities, but most regularities do not rise to this universal level. A regularity may only hold in your local environment. It may be temporary. It may be inconsistent. To be useful to an innovator, it just needs to be exploitable. With information about regularities, innovators do far better than random chance.

This will be clearer with some examples.

Regularities in technologies, organisms, organizations, and ideas

Let’s start with physical technologies. In The Nature of Technology: What it is and How it Evolves, economist Brian Arthur makes the observation that all technologies are built from sub-components. These sub-components are themselves composed of sub-sub-components, which are in turn composed of sub-sub-sub-components and so on. The F-35 jet, for example, is composed of:

the wings and empennage; the powerplant (or engine); the avionics suite (or aircraft electronic systems); landing gear; flight control systems; hydraulic system; and so forth. If we single out the powerplant (in this case a Pratt & Whitney F135 turbofan) we can decompose it into the usual jet-engine subsystems: air inlet system, compressor system, combustion system, turbine system, nozzle system. If we follow the air inlet system it consists of two boxlike supersonic inlets mounted on either side of the fuselage, just ahead of the wings. The supersonic inlets… (Arthur 2009, pg. 40)

And on and on until we arrive at some “raw element” of technology. Arthur argues that these fundamental elements are “captured natural phenomena.” I prefer the term exploited regularities. Arthur provides a non-exhaustive list of regularities (pg. 52) that are commonly exploited in technology (I have broken them up into bullets):

  • A fluid flow alters in velocity and pressure when energy is transferred to it (used in the compressor);
  • certain carbon-based molecules release energy when combined with oxygen and raised to a high temperature (the combustion system);
  • a greater temperature difference between source and sink produces greater thermal efficiency (again the combustion system);
  • a thin film of certain molecules allows materials to slide past each other easily (the lubrication systems);
  • a fluid flow impinging on a movable surface can produce “work” (the turbine);
  • load forces deflect materials (certain pressure-measuring devices);
  • load forces can be transmitted by physical structure (load bearing and structural components);
  • increased fluid motion causes a drop in pressure (the Bernoulli effect, used in flow-measuring instruments);
  • mass expelled at a velocity produces an equal and opposite reaction (the fan and exhaust propulsion systems).

Technological elements exploit these and many other regularities to do something interesting. In Arthur’s framing, the interesting thing technologies do is “fulfill a human purpose.” These elements are then combined with others, and built up into modules and assemblies and other subcomponents, which are themselves rearranged and recombined with others, until a set of interdependent regularities are coordinated and arranged to perform some complex desired task. The exploited regularities in a technology are like the disparate voices of instruments in an orchestra, coordinated by score and conductor to produce music.
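
Arthur’s recursive decomposition is easy to mimic in a toy data structure. Here is a minimal sketch; the component names are simplified from the F-35 passage above, the structure is purely illustrative, and the “lift” regularity attached to the wings is my own addition rather than an item from Arthur’s list.

```python
# A toy version of Arthur's view of technology: every component is either a raw
# element that exploits some regularity, or a combination of sub-components.
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Component:
    name: str
    exploits: Optional[str] = None           # the regularity a raw element taps into
    parts: List["Component"] = field(default_factory=list)

    def raw_elements(self):
        """Walk the hierarchy down to its raw elements."""
        if not self.parts:
            yield self
        for part in self.parts:
            yield from part.raw_elements()

engine = Component("turbofan engine", parts=[
    Component("compressor", exploits="a fluid flow alters in velocity and pressure when energy is transferred to it"),
    Component("combustion system", exploits="certain carbon-based molecules release energy when combined with oxygen"),
    Component("turbine", exploits="a fluid flow impinging on a movable surface can produce work"),
])
aircraft = Component("aircraft", parts=[
    Component("wings and empennage", exploits="air moving over a shaped surface produces lift"),
    engine,
])

for element in aircraft.raw_elements():
    print(element.name, "->", element.exploits)
```

The “orchestra” is the nesting itself: the interesting behavior lives in how the exploited regularities are arranged, not in any single raw element.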

It isn’t only the “atomic elements” of technologies that exploit regularities. Frequently, a suite of components collectively allows for new regularities to be exploited. For example, the atomic bomb taps into regularities about the behavior of highly concentrated uranium isotopes. At a certain density, naturally occurring atomic decay can trigger a chain reaction, with the release of tremendous energy as a side effect. Accessing this regularity, however, requires an entire mini-industry of other technologies to create the uranium and push its density past the critical point.

And so technologies fundamentally rely on the exploitation of regularities to do something interesting. But this applies to organisms as well as physical technologies. After all, in a sense, organisms are nothing but very complicated machines. They even share the hierarchical nature of technologies. Our bodies are built from a set of organ systems, which are themselves composed of differentiated tissues, which are themselves built from an army of distinct cell types, which are themselves assembled from a huge collection of molecular machines.

And these tiny molecular machines run on regularities in nature, just like human technologies. Life’s Ratchet: How Molecular Machines Extract Order from Chaos by Peter Hoffman provides an overview of the regularities exploited by the smallest units in our bodies. These regularities differ from the kind exploited by human technology, because they exist only at the tiny scale within a cell. Yet, in large numbers, carefully orchestrated, they produce us. Examples include:

  • electric charges of the same sign repel, opposite ones attract (used to “lock” proteins and molecular machines into stable configurations)
  • thermal energies are random (used to jostle molecular machines out of stable configurations)
  • at the nano-scale, thermal and electrostatic forces are approximately equal (used by molecular machines to move from one stable configuration to another)
  • events that individually occur with very small probability almost certainly occur at least once, given many chances (used by cells to ensure desirable small probability events happen at least once)
  • In a positive feedback loop, a small initial cause can have a large final effect (used by cells to amplify desirable events that happen rarely)

The regularities listed above have the character of universal natural laws. But organisms also rely on merely local regularities. For example, many plants cannot grow well when moved to new latitudes. They use the length of day as a clock to coordinate their growing cycles, and moving them to a new latitude, where the length of day changes differently over the year, disrupts these signals. As an aside, note again that the large-scale organism exploits regularities (e.g., the length of days over a year) that are not used by the individual molecular machines that comprise it. The whole is more than the sum of its parts.

And neither are organisms the only things that rely on local regularities. Arthur, for instance, points out that a whole host of technologies break down when moved into space. They require gravity to function as intended. More broadly, there is a rich literature in the economics of growth about “local technologies.” Most technologies are invented in wealthy countries, and are often less productive when deployed in poorer ones. Tacit in their successful operation are all sorts of assumptions about regularities in the prices of inputs, the knowledge of users, the tasks they will be asked to perform, and the availability of complementary goods and services.

Next, what we have said already about technologies and organisms can also be applied to human organizations. They too organize themselves in branches, divisions, phalanxes, units, and so on. And they too exploit regularities. It’s just that these regularities pertain to human society. More so than natural phenomena, the regularities in society are local. Here are a few regularities in human society, ordered roughly from the more to less universal:

  • humans are adept at learning to copy new tasks, if shown how
  • humans can work with attention to detail for 4-8 hour blocks
  • more of a good or service will be demanded if the price declines
  • human desires for specific goods and services are relatively stable
  • people will accept fiat currency as a unit of exchange
  • money can safely be invested and earn a 2% return
  • the electricity will only rarely go out
  • most consumers know how to use the Apple App Store

Lastly, art, too, relies on regularities. Being “interesting” in art can mean being “thought-provoking”, “beautiful”, “novel”, and other things. Art relies on regularities at several levels. Audio-visual forms exploit regularities in physical laws to transmit sound and light. They may also exploit regularities in human psychology to elicit reactions in viewers. For example, certain techniques in horror movies can reliably elicit feelings of tension and surprise in audiences. At another level, art can exploit cultural regularities. Certain symbols, motifs, and themes may be more or less recognizable to the audience.

I’ll stop the tour at this point. We’ve seen briefly how technologies, organisms, organizations, and art all exploit regularities. Some of these regularities hold fairly universally. Others are highly local. Later, I will assert that innovations in the narrative arts and knowledge itself rely on regularities. But first, we need to introduce one more concept: representations.

Representations and Regularities

Regularities are “out there” in nature, waiting to be exploited. But to reproduce an innovation, you need a way to communicate the regularity to the reproducer. A representation is information about a regularity that can be communicated.

There are several kinds of representation. In this series I focus on five. Their borders are fuzzy, rather than sharp. In later posts, I’ll talk about each in more detail. Here I briefly introduce them.

Rules

This includes logical statements, causal assertions, and quantitative equations. At bottom, rules insist things are related in a certain, definite way. The discussion of regularities in the preceding section frequently used rules as representations. Examples included:

  • “a fluid flow alters in velocity and pressure when energy is transferred to it”
  • “electric charges of the same sign repel, opposite ones attract”
  • “more of a good or service will be demanded if the price declines”

Other examples of rules include:

  • Tuesday comes after Monday.
  • All mammals are warm-blooded, have hair of some sort, and mammary glands.
  • Every action has an equal and opposite reaction.
  • The Pythagorean theorem.
  • E = mc^2
  • Light travels at 299,792,458 meters per second.

Humans seem to prefer expressing regularities as rules whenever possible. Rules are clear and straightforward. They can be chained together to form new rules. Alas, many of the regularities in nature cannot easily or concisely be expressed as rules.

Probabilistic statements

In a host of domains, definite ironclad rules don’t exist. Things “usually” work one way, or “sometimes” work. Some notion of uncertainty, randomness, and probability is necessary to communicate many regularities. Probability allows us to exploit regularities in domains where we lack the information or computational power to develop rules. We also used probabilistic statements in our preceding discussion of regularities. Examples included:

  • “technologies [invented in rich countries] are often less productive when deployed in poorer ones”
  • “events that individually occur with very small probability almost certainly occur at least once, given many chances”
  • “the electricity will only rarely go out”

Other examples of probabilistic statements include:

  • There is a 50% chance of getting heads when you flip a fair coin.
  • Hillary Clinton has a 70% probability of winning the 2016 presidential election.
  • Thunderstorms sometimes generate tornadoes.
  • Most people are less than 7 feet tall.
  • The probability of a given quantum state is equal to its amplitude squared.

Despite their utility, humans are often resistant to probabilistic thinking, and frequently compress probabilistic statements into rules (for example, many interpreted a 70% probability that Hillary Clinton would win simply as “Hillary is going to win”). The book Superforecasting by Philip Tetlock and Dan Gardner provides a good overview of human resistance to thinking probabilistically.

Nonetheless, rules and probabilistic statements are the preferred representations in the quantitative sciences. There is a whole host of rules and probabilistic statements (the field of statistics) that can be applied to them. We can use these meta-rules to combine, transform, and derive new probabilistic statements from old ones. But rules and probabilistic statements are relatively new in the history of the universe. We now turn to older forms of representing regularities.
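
To make the “meta-rule” idea concrete, here is a small sketch of one such derivation: starting from a single per-trial probability, the statistics of independent trials mechanically produce the “small probability, many chances” regularity listed earlier. The specific numbers are made up for illustration.

```python
# Meta-rule from probability theory: if an event has probability p in each of n
# independent trials, the probability it happens at least once is 1 - (1 - p)**n.
def at_least_once(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# "Events that individually occur with very small probability almost certainly
# occur at least once, given many chances."
print(at_least_once(p=1e-6, n=10_000_000))   # ~0.99995
```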

Metaphors and analogies

A fuzzier but even more widespread form of representation is the metaphor or analogy: this thing you don’t know so well is like this other thing you do know. Metaphors and analogies tie together bundles of attributes and properties. They allow us to map them from one setting to another that shares some of the same attributes and properties. There are some who argue metaphors and analogies are the basic structure of thought (Surfaces and Essences: Analogy as the Fuel and Fire of Thinking by Douglas Hofstadter and Emmanuel Sander is one example I hope to write about later).

I too have used metaphor and analogy to represent regularities in this post. Examples include:

  • “the challenge of innovation is finding a way to take a leap into the unknown and to find something interesting when you land.”
  • “like the disparate voices of instruments in an orchestra, coordinated by score and conductor to produce music.”
  • “After all, in a sense, organisms are nothing but very complicated machines.”

Innovation is not literally about jumping into strange places. But that is an idea we have some intuitions about, and those intuitions can be transferred. Technologies aren’t literally orchestras. But our intuitions about the scale and complexity with which an orchestra unifies disparate sounds are useful to transfer. There are differences between organisms and machines. But not for the purposes of the discussion at hand.

Metaphors are natural to us and capable of capturing complex regularities not easily expressed as rules and probabilistic statements. And they are useful for guiding our excursions into the unknown.  For example, someone who is familiar with dogs knows they generally have a bundle of certain attributes: hair, four legs, sharp teeth, etc. Suppose that person encounters an unknown animal with many of these same attributes, perhaps a wolf. If it starts growling at them, they can use the analogy of dog behavior to infer what might happen next (they might get bit). But the analogy gets less useful the farther the new example is from the category’s archetypes. Do lions growl before they strike? What about alligators? What about a man in a gorilla suit?
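
A crude way to picture this kind of analogical inference is as overlap between bundles of attributes: match the new thing to the archetype it shares the most features with, then transfer the archetype’s unobserved attributes. Here is a toy sketch; the categories and features are my own illustrative choices, not a serious model of cognition.

```python
# Toy analogy-as-feature-matching: infer unobserved attributes of a new animal
# from the known archetype it most resembles.
archetypes = {
    "dog":       {"hairy", "four legs", "sharp teeth", "growls before biting"},
    "alligator": {"scaly", "four legs", "sharp teeth", "strikes without warning"},
}

observed = {"hairy", "four legs", "sharp teeth"}   # say, an unfamiliar wolf

def best_match(observed, archetypes):
    # Score each archetype by the overlap of its features with what we observed.
    def score(name):
        features = archetypes[name]
        return len(observed & features) / len(observed | features)
    return max(archetypes, key=score)

category = best_match(observed, archetypes)
inferred = archetypes[category] - observed
print(category, inferred)   # dog {'growls before biting'} -- the analogy fills the gap
```

And, as the paragraph above notes, the farther the new example is from the archetype, the less the transferred attributes can be trusted.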

Neural Networks

The previous three types of representation are familiar to us because they can all be conveyed in language, and this is the primary way humans convey information to each other. But there are other ways to encode regularities as information. Our brains, for example, give us intuitions and “feelings” we cannot always express in language. And babies and animals can learn regularities without language too.

It turns out regularities can also be represented in the structure of special kinds of interconnected networks. The brain probably operates, at least in part, on these principles. Even if it does not, this method of representing regularities has recently been highly successful in artificial intelligence. We will come back to this form of representation, because it is not easy to describe succinctly if one is not already familiar with the concept.
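
For readers who want a taste now, here is roughly the smallest possible example: a single artificial neuron that, after seeing examples, ends up encoding a simple regularity (logical AND) in the numbers attached to its connections rather than in any rule someone wrote down. This is my own toy illustration, not a description of AlphaGo or of the brain.

```python
import random

# The regularity to learn: "the output is 1 only when both inputs are 1" (logical AND).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]   # connection weights
b = random.uniform(-1, 1)                            # bias

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights a little after every mistake.
for _ in range(200):
    for x, target in examples:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b    += 0.1 * error

print(w, b)                                 # the "representation" is just these numbers
print([predict(x) for x, _ in examples])    # [0, 0, 0, 1]
```

The trained weights reproduce the regularity, but nothing in them looks like the sentence “output 1 only if both inputs are 1”, which is exactly what makes this form of representation hard to talk about.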

Instantiations

Finally, we come to the oldest, simplest, and most robust way of representing regularities: as an instantiation of whatever is exploiting the regularity.

For example, suppose I discover an alien technology operating on principles completely foreign to me, and lacking any explanatory text. I may still be able to reproduce it by exactly replicating the technology (atom by atom if necessary). More prosaically, there are frequently techniques and processes that are very difficult to convey in language. For example, teaching someone how to shoot a basketball is probably much easier to do by demonstration than by any form of written or verbal communication. Complicated lab techniques may also need to be demonstrated rather than communicated. It may even be that the performer can’t explain why this technique works. They simply know “if you copy this technique, it will work.”

This is the type of representation most frequently used by nature itself. Nature stores the regularities it has discovered in the physical design of its organisms, which pass on instructions about how to replicate themselves, rather than instructions about the regularities they exploit.

Narratives and Knowledge; Representations and Regularities

We now return to knowledge and narratives, which I earlier asserted also rely on regularities. Let’s begin with knowledge. When I speak of innovation in knowledge, I am essentially referring to new explanations and theoretical constructs, rather than the documentation of new phenomena. For my purposes, “explanations” are just a more informal form of “theories” (the kind of thing often laid out in an academic article or book).

The unusual thing about explanations and theories is that the raw materials they are made from are themselves representations. Explanations and theories in their simplest form essentially are representations. In more complex forms, they are large sets of ordered and interacting representations. They deploy rules, probabilistic statements, and metaphors to convey information about regularities.

Inventing new explanations and theories is a matter of discovering new representations or, more likely, combining previously known representations in novel and interesting ways. What makes an explanation or theory interesting? Here, being “interesting” might mean the explanation “allows one to make predictions about the world” (the classic Popperian model of a good theory). But it could also mean the explanation is “plausible,” “provocative,” “beautiful,” “thought-provoking,” “widely applicable,” and so on.

Moreover, just as a random collection of technological components is unlikely to do something interesting, a random collection of representations is also unlikely to be interesting. Interesting explanations and theories exploit meta-regularities about the relationships between different representations. The rules of logical inference, for example, form a set of meta-regularities about representations. One of these meta-regularities, represented as a rule, might say “If you have a representation of the form ‘if A then B’ and a representation of the form ‘if B then C’, then the representation ‘if A then C’ corresponds to a regularity as well.” (There is a small sketch of this inference rule just after the list below.) Mathematical and statistical operations of all stripes fall under a similar meta-regularity. Other examples include:

  • Occam’s Razor: the simplest representation is most likely to best express a regularity
  • Accordance with data: representations that generate predictions that match data are more likely to accurately represent regularities.
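
As a sketch of how mechanical this kind of meta-regularity can be, here is the transitivity rule from above applied to a couple of toy “if A then B” representations. The statements themselves are illustrative placeholders of my own.

```python
# Meta-regularity: from "if A then B" and "if B then C", derive "if A then C".
rules = {
    ("energy is transferred to a fluid", "the fluid's pressure rises"),
    ("the fluid's pressure rises", "the turbine spins faster"),
}

def close_under_transitivity(rules):
    rules = set(rules)
    changed = True
    while changed:
        changed = False
        for a, b in list(rules):
            for c, d in list(rules):
                if b == c and (a, d) not in rules:
                    rules.add((a, d))   # a new representation, derived rather than observed
                    changed = True
    return rules

for antecedent, consequent in sorted(close_under_transitivity(rules)):
    print(f"if {antecedent}, then {consequent}")
```

The derived statement (“if energy is transferred to a fluid, then the turbine spins faster”) was never observed directly; it was manufactured from existing representations, which is one small way new explanations get built.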

Finally, what of narrative art? As noted, being interesting in art can mean a lot of things: “thought-provoking”, “revelatory”, “beautiful”, “thrilling”, “surprising”, etc. Narrative arts can use a variety of techniques to elicit these responses, but a plausible match with regularities in the world forms a unique aspect of narrative art. Do the characters’ motivations “make sense” in the context of the world? That is, do the characters’ actions and thoughts align with the audience’s pre-existing internal representations of how people act? Sometimes a book explores unfamiliar settings, in which the audience knows few regularities with which to judge the plausibility of the narrative. In these kinds of works, the interest is sometimes the uncovering of new regularities in the fictional world. It is a regularity in our nature that we enjoy discovering regularities! (Indeed, I’m banking on that to find an audience for these posts!)

A Theory of Innovation

To summarize.

Things tend to be interesting when they are hierarchical systems of interacting components that exploit regularities in the world. Creating new interesting things (innovation) requires leaping into the unknown and assembling a new system. If we had nothing to guide us, these leaps would rarely pay off. Innovation is possible largely because it uses regularities about the way the world works to guide its leaps. It assumes the regularities that hold in the world we know will continue to hold in the unknown (a leap of faith not always validated!). Innovators do not directly perceive regularities, but rather representations of them. Representations can take several forms, and I focus on five: rules, probabilistic statements, metaphors, neural networks, and instantiations. Finally, representations can themselves be woven together into hierarchical systems called explanations and theories. These “innovations” are themselves interesting when they exploit meta-regularities about representations.