
How Minerva University Teaches Habits of Mind

Minerva is a new university best known for its global campus. Students spend their first year in San Francisco, and then each class moves en masse through Seoul, Hyderabad, Buenos Aires, Berlin, London, and Taipei over the remaining six semesters. During my undergraduate years, studying abroad was an important experience for me, so I’ve been interested in Minerva since I heard about it a few years ago. Earlier this year I read their semi-official book Building the Intentional University: Minerva and the Future of Higher Education (eds. Stephen M. Kosslyn and Ben Nelson). But after reading the book, I’ve come to think the global campus is a bit of a sideshow. Minerva’s really innovative idea is actually in the considerably less flashy domain of curriculum design.

In this post, I’ll briefly explain why I think their curriculum is so interesting and what problem it is trying to solve. But up front I want to make it clear that I don’t work for Minerva or even know anyone who does. I’ve just read their book and some other books, and I teach at a university myself.

Minerva’s curriculum is designed to foster certain habits of thought. These are habits we would associate with critical reasoning, problem solving, communication, leadership, and teamwork skills. So far, so bland. What curriculum isn’t designed to foster good habits of thought? But as we will see, most universities do a fairly poor job of teaching these skills. The difference may well be that where most universities are laissez-faire about how they teach good habits of thought, Minerva is intentional.

Before turning to the Minerva approach, it’s necessary to take a minute to establish the disappointing track record of the traditional approach, and to suggest some reasons why it has disappointed. This is all laid out quite well in Bryan Caplan’s The Case Against Education: Why the Education System is a Waste of Time and Money.

What Does Education Do?

Caplan’s book is about explaining why an extra year of education is correlated with a 10% increase in wages. The traditional explanation for this fact is that education builds valuable knowledge and skills, called “human capital,” which make graduates more productive employees. Caplan’s central argument is that education is primarily (80%) about signaling, rather than the accumulation of useful skills and knowledge. By “signaling,” Caplan means that the primary purpose of education is to certify students as smart, diligent, focused, submissive, and conforming to social conventions. This basket of characteristics is highly desirable to employers, and employees with these characteristics are compensated with higher wages. The key assertion is that education merely certifies these traits; it does not build them.

While he makes several arguments that signaling rather than human capital explains the educational wage premium, the most convincing to me is simply the observation that higher education does not appear to build many skills useful to employers.

I think of myself. I did a double-major in Physics and Religious Studies for my undergraduate degree and was a very strong student. For the most part I loved the experience. But when I look back at my transcripts, I count five classes for which I would struggle to remember more than a sentence’s worth of content. I count four more classes that I don’t even remember taking. As for the rest, while I often found the material interesting, I think the professional applicability of the specific facts, models, and tools I learned is quite limited. Indeed, I spent two years as an economic analyst after graduating and never used any physics (actually, the religious studies mattered more!). This is not to say nothing was useful. But Caplan’s estimate that only 20% of what we learn in school directly contributes to work productivity seems plausible on its face.

And I don’t think my experiences are that unusual. Naturally, there are majors and courses that build a lot of skills and knowledge directly relevant to certain career paths. But as Caplan points out, even here the human capital story faces challenges, because students frequently fail to get jobs in the field closest to their major. Few science majors become scientists, few psychology majors become psychologists, few economics majors become economists, and so on. And then there are all the other majors that do not even try to build skills for specific professions. This is not to say such degrees are a bad idea (wages aren’t everything!), only that the human capital story as an explanation for the positive education-wage correlation looks weaker when we look more closely at what is studied. (Caplan also argues education does a bad job of building wisdom and other non-pecuniary forms of human capital, but I’m not going to go into that.)

Deeper Knowledge?

The natural retort to this line of argument is that the specifics of curriculum do not matter because education develops deeper capacities. What is “really” taught are habits of thought: how to construct and evaluate arguments, how to weigh evidence, how to communicate effectively, etc. Whether we study English, economics, engineering, or entomology isn’t particularly relevant. These subjects just give us raw material with which to practice and hone critical thinking and communication skills.

For Caplan, this is wishful thinking. There are a host of studies that look for domain-general forms of learning, and the results tend to disappoint. Caplan describes a study measuring the quality of arguments and reasoning given by students at different points in their education. Fourth-year undergraduates did no better than first-year undergraduates, and fourth-year graduate students did only marginally better than first-year graduate students. Another study looks at how students apply statistical reasoning to real life examples, again finding the vast majority fail to do so.

Another way to think about this issue is through the lens of knowledge transfer. It may not matter whether people retain specific facts and surface details, as long as they retain the deep structure of theories and ideas. They can then apply these deep structures to contexts with different surface details. Unfortunately, a body of work by educational psychologists finds that people do not naturally transfer models and ideas from one domain to another. Most of the things we learn get “stuck” in the context in which we learn them. If we don’t use those skills frequently, they fade away within a few years. This is the fate of most of what we learn in school.

The problem is this: students tend to learn only what will be on the test, and domain-general thinking is not on the test. Again taking myself as an example, I grade my students on how well they do in my class; I do not follow them out into the world (or even into other classes) and adjust their grade based on how well they apply the models I teach them in novel contexts. The end result is that knowledge remains bottled up in the specific domain in which it was learned, and unless a student builds explicitly on that domain, it is lost through lack of use.

Reorganizing Undergraduate College

Caplan’s solution to this problem is to scale back the signaling arms race by cutting education subsidies and to expand curriculum that explicitly teaches skills employers want – namely, vocational training. Minerva, in contrast, thinks it can actually teach those habits of thought that we want college to develop. But to do that, it recognizes that it needs to change the way college is traditionally organized. It does this in three main ways:

  1. The curriculum explicitly teaches and assesses domain-general habits of thought
  2. It teaches these habits of thought in varied contexts
  3. It assesses student mastery over all four years of education

Let’s take these in turn.

The curriculum explicitly teaches and assesses domain-general habits of thought

Today’s universities spend the majority of their time explicitly teaching and assessing students on their mastery of domain-specific content. We trust this effort will also lead to the development of domain-general critical thinking and communication skills as a useful by-product. For example, if you take an economics class, you are taught economics and assessed on your command of it. The hope is that, to the extent mastering economics requires deeper critical thinking skills, you will pick those up too.

Minerva flips this around, and explicitly teaches and assesses students on their mastery of 100 domain-general “habits of mind.” When I say “explicit” I mean it; the 100 habits of mind are enumerated and spelled out in appendix A of Building the Intentional University. Course content is selected to illustrate and practice the use of these principles.

What are these habits of mind? Examples include:

  • Evaluate whether hypotheses lead to testable predictions.
  • Identify and minimize bias that results from searching for or interpreting information to confirm preconceptions.
  • Apply effective strategies to teach yourself specific types of material.
  • Tailor oral and written work for the context and the audience.

Students are assessed on their successful use of these and the other 96 habits of thought both through coursework and during class time. Classes are delivered online with students participating via webcam, and so there is a video record of all class discussions and activities. Classes are designed so that no more than 25% of class time is spent passively absorbing material, meaning students spend the majority of class time on activities in which they can practice or demonstrate habits of thought. Typically, a few habits are emphasized each week. After classes, professors go back through the footage and assess students’ use of habits of mind, offering feedback to students.

If you want to teach deep thinking habits, it’s probably best to explicitly teach them, rather than to trust they will be extracted from course content. In Caplan’s discussion of the literature on critical thinking skills, he describes a study where students were taught a solution concept either explicitly as an algebraic technique, or in the guise of a structurally equivalent physics technique. Students were then asked to solve a problem using the same solution technique, but in the guise opposite to what they learned (so algebra students solved a physics problem and physics students solved an algebra problem). Most of the students who studied the algebraic version of the technique used it on the physics problem, but few of the students who studied the physics technique used it on the algebra problem. It seems to be easier to transfer knowledge from the general to the specific than vice versa.

It teaches habits of thought in varied contexts

While Minerva teaches and assesses domain-general “habits of mind,” some concrete context is still necessary. You can’t really teach someone how to “evaluate whether hypotheses lead to testable predictions” in the abstract, as some kind of Platonic ideal. All these habits of thought are instead practiced in the context of more traditional course content. The problem is that anytime you introduce context, you run the risk of trapping the habit of mind in that specific context. As noted above, far transfer is hard. If you practice the “testability” habit of mind in a psychology class, it might never occur to you to use the same habit in a business context.

To address this problem, all Minerva students take the same courses in their first year. These courses give students a foundational liberal arts and sciences education. This naturally provides many different contexts for students to practice the same habits of thought, helping prevent these habits from getting trapped in the context where they were first taught. And in all of these courses, students are graded on their mastery of these 100 habits of thought.

Because of this scheme, far transfer itself becomes a skill you learn to cultivate and practice. You will learn a habit of thought in one class and be graded on your ability to use it in a different one. I can’t grade a student on her ability to apply the concepts I teach her in her other courses, but in Minerva something like this is the norm. This is only possible because the first year curriculum and assessment criteria are collaboratively developed by the Minerva faculty.

It assesses student mastery over all four years of education

A final challenge to learning these habits of mind is that we quickly forget what we don’t use. To avoid this fate, Minerva adopts an unusual grading system. As noted above, students are graded in their first year on their mastery of 100 habits of thought. But this grade is retroactively adjusted over the following three years, based on how well students continue to use the habits of thought. Rather than learning something for a semester and then forgetting it, students must maintain (or try to improve) their first-year grades by continuing to demonstrate mastery over these habits. Again, something like this is only possible when the curriculum is collaboratively developed by the entire Minerva faculty. It only works because professors teaching later courses “buy in” to it.

Can Existing Universities Adopt these Ideas?

To summarize, Minerva tries to teach habits of thought by explicitly teaching them, assessing them, and then forcing students to practice far transfer and retention by requiring the use of the habits across all classes for four years. I think this is all pretty cool, and if it works it’s exciting to think we have found a better way to teach domain-general thinking.

However, it works for Minerva because curriculum design has become a collective rather than an individual endeavor. It’s hard for existing institutions to completely adopt this format, since professors have so much autonomy in what and how they teach. But to close, I’ll toss out a few ideas for how a traditional university might experiment with some of the same concepts.

One incremental step could be pairing up complementary classes that share assessment. For example, a writing class could pair up with a philosophy class where students have to write essays. Students would learn explicit habits of thought – in this case about writing – in one of the classes, and practice that knowledge in the other. The essays would give students a specific context in which to practice their writing skills, and could be jointly graded by both instructors: the philosophy instructor grading for philosophy, the writing instructor for writing.

A step beyond this would be the creation of an explicit “habits of thought” class that is taught every semester. Like Minerva, it would explicitly teach and assess mastery of habits of thought. But the class could be structured so that students are assessed on how they use these habits in their other classes, which would be taught in traditional ways. For example, perhaps students write reflections at the end of each day on how they used the habit of thought in their other classes. Or perhaps students taking the habits of thought class are organized into small groups of students who are taking the same classes, and these could be a forum for discussing how the habits of thought were applicable (or not).

A further step would be the creation of a “habits of thought” minor with its own set of core classes organized much like Minerva’s first year. This minor would lock all students into a required set of classes in their first year, but these courses could be carefully honed so that students can slip into the second-year stream of traditional majors. For example, a student could become an economics student with a minor in “habits of thought” by taking a foundational year that covers a broad set of classes satisfying normal general education requirements. Ideally, these classes would also include enough of the material taught in introductory economics courses so that students could jump into an intermediate course in the following year. The minor could continue to meet throughout all four years to keep the habits of thought in practice.


2018: My Year in Books

Between the new baby, new house, and new job, I didn’t read as much this year as last year. I made it through 45 books. Of those, 8 were plays by Shakespeare. If I ever read all of his works, I’ll make a post about them. But here’s a rough ranking of the remaining 37, with a sentence on each.

Caveat: I liked all of these. I abandon books I don’t like.

Conceptual Non-Fiction

  1. The Second World Wars by Victor Davis Hanson: Wonderfully organized opus on World War II as a contest of national productive and organizational capacity.
  2. The Book of Why by Judea Pearl and Dana Mackenzie: Correlation can imply causation (when you combine data with models).
  3. The Enigma of Reason by Hugo Mercier and Dan Sperber: Reason as a social influence tool that, as a bonus, helps you figure out how the world works.
  4. Old Masters and Young Geniuses by David Galenson: Aesthetic innovation comes from either evolutionary processes (tinker and evaluate) or reason (plan and execute).
  5. The Son Also Rises by Gregory Clark: Social status is really sticky across generations.
  6. The Measure of Reality by Alfred W. Crosby: Between 1250 and 1600 in Europe, numbers and measurement colonized new domains, potentially setting up our current paradigm of continuous technological progress.
  7. On Writing by Stephen King: Moving story of King’s own entry into the writing life, and his intuitive story-first method of writing (he’s an evolutionary creator).
  8. The Great Leveler by Walter Scheidel: The final two sentences sum it up: “All of us who prize greater economic equality would do well to remember that, with the rarest of exceptions, it was only ever brought forth in sorrow. Be careful what you wish for.”
  9. How to Read a Book by Mortimer J. Adler and Charles Van Doren: Very useful, but I need to read it again correctly.
  10. Radical Markets by E. Glen Weyl and Eric Posner: A feast of ideas to wrestle with. (related post)
  11. Cognitive Gadgets by Cecilia Heyes: Pushing cultural evolution even farther; the ways we think and learn are themselves cultural products.
  12. Surfaces and Essences by Douglas R. Hofstadter and Emmanuel Sander: I wasn’t a fan of the presentation, but it’s an impressive feast of ideas to wrestle with. (short review)
  13. The Allure of Battle by Cathal J. Nolan: Using the history of (mostly) Western war to argue warmakers endlessly underestimate the cost and duration of their wars (a good companion to my #1 pick). (related post)
  14. Capitalism without Capital by Jonathan Haskel and Stian Westlake: Goes a long way towards explaining several contemporary economic puzzles.
  15. Misbehaving by Richard Thaler: How to shift a paradigm.
  16. The Disruption Dilemma by Joshua Gans: It’s more complicated than Christensen claims.
  17. Free Innovation by Eric Von Hippel: Neat little book on innovation outside the market system (also, it practices what it preaches).
  18. Improbable Destinies by Jonathan B. Losos: Evolution happens faster than you think.
  19. Zero to One by Peter Thiel: Efficiently communicates a lot of original ideas.
  20. The Hungry Brain by Stephan Guyenet: Good overview of the brain and will convince you that dieting is complicated.
  21. The Lean Startup by Eric Ries: More like “The Lea(r)n Startup.”
  22. Theory and Reality by Peter Godfrey-Smith: Very good overview of the basics of philosophy of science.
  23. True Stories and Other Essays by Francis Spufford: Probably for Spufford fans only (I count myself one).

Fiction and Narrative Non-Fiction

  1. Spring by Karl Ove Knausgaard: A left turn in the seasons quartet that completely recontextualizes Autumn and Winter.
  2. The Dispossessed by Ursula K. Le Guin: A masterpiece of worldbuilding, exploring how an anarchist society would work.
  3. The Worst Journey in the World by Apsley Cherry-Garrard: Who knew Earth could be so inhospitable to its children?
  4. Winter by Karl Ove Knausgaard: See the world new again.
  5. The Age of Innocence by Edith Wharton: The impossibility of shrugging off society’s constraints and retaining its gifts (a reread).
  6. Educated by Tara Westover: A stew of ideas – gaslighting, abuse, cultural immigration, the limits of familial reconciliation, and the founding of religions.
  7. Pet Sematary by Stephen King: A heat-seeking missile to the heart of this new parent.
  8. The Fifth Season by N.K. Jemisin: Exploring grief and prejudice in a geologically active fantasy world.
  9. Bad Blood by John Carreyrou: Riveting account of Theranos’ rise and fall.
  10. Golden Hill by Francis Spufford: Pre-Revolutionary War New York is a wonderful setting.
  11. The Player of Games by Iain M. Banks: Explores similar terrain to The Dispossessed.
  12. The White Darkness by David Grann: The strange siren call of the Antarctic.
  13. The Everything Store by Brad Stone: Solid story of the rise of Amazon.
  14. Louis Riel by Chester Brown: Nuanced story of a complicated revolutionary.

Why is knowledge transfer hard in neural nets but easy with metaphor?

Neural networks (NNs) and metaphors are both ways of representing regularities in nature. NNs pass signals about data features through a complex network and spit out a decision. Metaphors take as a given that we know something, and then assert something else is “like” that. In this post, I am thinking of NNs as a form of representation belonging to computers (even if they were initially inspired by the human brain), and metaphors as belonging to human brains.

These forms of representation have very different strengths and weaknesses.

Within some narrow domains, NNs reign supreme. They have spooky-good representations of regularities in these domains, best demonstrated by superhuman abilities to play Go and classify images. On the other hand, step outside the narrow domain and they completely fall apart. To master other games, the learning algorithms AlphaGo used for Go would essentially have to start from scratch. It can’t condense the lessons of Go down to abstract principles that apply to chess. And its algorithms might be useless for a non-game problem such as image classification.

In contrast, a typical metaphor has opposite implications: great at transferring knowledge to new domains, but of more limited value within any one domain. Anytime someone tells a parable, they are linking two very different sets of events in a way I doubt any NN could do. But metaphors are often too fuzzy and imprecise to be much help for a specific domain. For instance, Einstein’s use of metaphor in developing general relativity (see Hofstadter and Sander, chapter 8) pointed him in the right direction, but he still needed years of work to deliver the final theory.

This is surprising, because at some level, both techniques operate on the same principles.

Feature Matching

Metaphor asserts two or more different things share important commonalities. As argued by Hofstadter and Sander, one of the most important forms of metaphorical thinking is the formation of categories. Categories assert that certain sets of features “go together.” For example, “barking,” “hairy,” and “four legs” are features that tend to go together. We call this correlated set of features the category “dog.” Categories are useful because they let us fill in gaps when something has some features, but we can’t observe them all.

This kind of categorization via feature tabulation was actually one of the first applications of NNs. As described by Steven Pinker in How the Mind Works, a simple auto-associator model is a NN where each node is connected to every other node. These kinds of NNs easily “fill in the gaps” when given access to some but not all of the features in a category. For example, if barking, hairy, and four legs are three connected nodes, then an auto-associator is likely to activate the nodes for “hairy” and “four legs” when it observes “barking.” Even better, these simple NNs are easy to train. And if such simple NNs can approximate categorization, then we would expect modern NNs with hidden layers to do that much better.
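An auto-associator takes surprisingly little machinery. Here is a minimal sketch in Python (NumPy) of the idea, a Hopfield-style network with a made-up feature set and a single stored “dog” pattern; it’s the textbook recipe, not Pinker’s exact model:

    import numpy as np

    # Features: [barks, hairy, four legs, meows]; +1 = present, -1 = absent.
    dog = np.array([1, 1, 1, -1])

    # Hebbian learning: strengthen connections between co-occurring features.
    W = np.outer(dog, dog).astype(float)
    np.fill_diagonal(W, 0)  # no self-connections

    # Partial cue: we observe only "barks"; the other features are unknown (0).
    state = np.array([1.0, 0.0, 0.0, 0.0])

    # Repeatedly pass the signal through the network until it settles.
    for _ in range(5):
        state = np.sign(W @ state)

    print(state)  # [ 1.  1.  1. -1.]: "barks" fills in "hairy" and "four legs"

With more stored patterns, the weights become a sum of such outer products, and the same update rule retrieves whichever stored category best matches the cue.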

Now, as I’ve argued elsewhere, proper use of metaphor isn’t as simple as matching features. The “deep features” of a metaphor are the ones that really matter. Typically there will be only a small number of these, but if you get them right, the metaphor is useful. Get it wrong, and the metaphor leads you astray.

But this isn’t so different from NNs either. NNs implement a variety of methods to prune and condense the set of features, almost as if they too are trying to zero in on a smaller set of “deep features.”

  • Stochastic gradient descent (a major tool in the training of NNs) involves optimizing on a random subset of your data at each step, rather than all the data. In essence, we throw some information away each iteration (although we throw away different information each time). Now, this is partially done to speed up training times, but it also seems to improve the robustness of the NN (i.e., it is less sensitive to small changes in the data set).
  • Dropout procedures involve randomly setting some parameters to zero during the optimization process. If the parameter isn’t actually close to zero, the optimization will re-discover this fact, but it turns out you get better results if you frequently ask your NN to randomly ignore some features of its data.
  • Information bottlenecks are NN layers with fewer nodes than the incoming layer. They force the NN to find a more compact way to represent its data, again, forcing it to zero in on the most important features.
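
For concreteness, here is what those three tricks look like in a toy PyTorch model. All the layer sizes and the data are placeholders I made up; the point is just to show where each idea slots in:

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(32, 64),
        nn.ReLU(),
        nn.Dropout(p=0.5),  # dropout: randomly ignore half the units each training pass
        nn.Linear(64, 8),   # information bottleneck: compress 64 signals down to 8
        nn.ReLU(),
        nn.Linear(8, 2),
    )

    # Stochastic gradient descent: each step optimizes on a small random
    # batch rather than the whole dataset.
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for step in range(100):
        x = torch.randn(16, 32)         # a random mini-batch (fake data)
        y = torch.randint(0, 2, (16,))  # fake labels
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()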

So, to summarize: using metaphor involves matching the deep features between two different situations. NNs are also trained to seek out the “deep features” of training data, the ones that are most robustly correlated with various outcomes. So why don’t NNs transfer knowledge to new domains as well as metaphors?

What are the Features?

It may come down to the kinds of features each picks out. As discussed in another post, the representations of NNs are difficult (impossible?) to concisely translate into forms of representation humans prefer. It’s hard to describe what they’re doing. So we can’t directly compare the deep features a NN picks out to the deep features we humans would select.

However, image classification NNs give us strong clues that NNs are picking up things very different from what we would select. There is an interesting literature on finding images that are incorrectly classified by NNs. In this literature, you start with some image and tweak as few pixels as possible, as little as possible, to fool the NN into an incorrect classification. For example, this image from the above link is incorrectly classified as a toaster:

Figure 1. Fooling image classification neural networks (source)
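
Attacks in this family come in several flavors (the toaster example uses a visible patch rather than invisible tweaks), but the simplest to write down is the “fast gradient sign method.” Here is a hedged sketch in PyTorch, where the model, image, and step size eps are all placeholders: compute how the model’s error changes with each pixel, then nudge every pixel a tiny amount in the direction that increases the error.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, true_label, eps=0.01):
        # Ask which direction of pixel change most increases the model's error.
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Move every pixel by at most eps: imperceptible to us, but often
        # enough to flip the network's classification.
        adversarial = image + eps * image.grad.sign()
        return adversarial.clamp(0, 1).detach()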

How can this be? Whatever the NN thinks a toaster looks like, it’s obviously different from what you or I would think. The huge gap between the deep features we identify and those identified by a NN is best illustrated by the following images from the blog of Filip Piękniewski.

Figure 2. Filip Piękniewski trained a NN to tweak gray images until they were classified with high confidence (source)

Filip starts with gray images and trains a NN to modify pixels until a second NN gives a confident classification. The top left image is classified as a goldfish with 96% probability. The bottom right is classified as a horned viper with 98% probability. The results are kind of creepy, as they highlight the huge gulf between how “we” and NNs “see.” Even though metaphor and NN both involve zeroing in on the deep features of a problem, the features selected are really different.

Different Data, Different Features

[Warning: this isn’t my area but it is my blog so I’m going there anyway]

One reason figure 2 is so alien to us is that it comes from a very alien place. Compared to a human being, a NN’s training data is extremely constrained. Yes, they see millions of images, and that seems like a lot. But if we see a qualitatively different image every three seconds, and we’re awake 16 hours a day, then we see a million distinct images every 52 days (16 hours is 19,200 three-second glimpses a day). And unlike most image classification NNs, we see those images in sequence, which is additional information. Add to that inputs from the rest of our senses, plus intuitions we get from being embodied in the world, plus feedback we get from social learning, plus the ability to try and physically change the world, and it starts to become obvious why we zero in on different things from NNs.

In particular, NNs are (today) trained to perform very well on narrow tasks. Human beings navigate far more diverse problems, many of which are one-of-a-kind. That kind of diverse experience gives us a better framework for understanding “how the world works” on the whole, but less expertise with any one problem. When faced with a novel problem, we can use our blueprint for “how the world works” to find applicable knowledge from other domains (figure 3). And this skill of transferring knowledge across domains is one that we get better at with practice, but which requires knowledge of many domains before you can even begin to practice.

Figure 3. “I gave it a cold.”

My earlier post on the use of metaphor in alchemy and chemistry illustrates how a better blueprint for “how the world works” can dramatically improve feature selection. Prior to 1550, alchemists used metaphor extensively to guide their efforts, but it mostly led them astray. They chose metaphors on the basis of theological and symbolic similarities, rather than underlying interactions and processes. This isn’t a bad idea, if you think the world is run by supernatural entities with a penchant for communicating revelations and other hidden knowledge to mankind. But a better understanding of “how the world works” (i.e., according to impersonal laws) allowed later chemists to choose more fruitful metaphors than the alchemists.

When I see something like Figure 2, I see an intelligence that hasn’t learned how the world “really is.” Animals and physical objects are clumps of matter, not diffuse color patterns, no matter how much those color patterns align with previously seen pixel combinations. But I can see how it would be harder to know that if you hadn’t handled animals, seen them from different angles, and been embodied in physical space.

So I think one reason human metaphor transfers knowledge so well is that it has so much more diverse training data to draw on. We pick deep features with an eye on “how the world works.” So why don’t AI companies just give their own NNs more diverse training data? One reason is that important parts of the structure of NNs still have to be hand-tuned to the kind of training data. You can’t just let an image classification NN loose on the game of Go and expect to get comparable results. There seems to be a big role for the architecture of NNs.

Whatever the “right” architecture is for the diverse training data humans encounter, evolution seems to have found it. But it took a long time. Evolution worked on the problem for hundreds of millions of years in parallel over billions of life forms. By contrast, AlphaGo Zero played 21 million games of Go to train itself. At one hour per game, that works out to a bit under 2,400 years, if the games were played at human speed one at a time.

In a sense, I think this makes NNs more impressive – look how much they’ve done with the equivalent of a paltry 2,400 years of evolution! But I also think it provides a warning that matching broadly human performance might be a lot harder than recent advances have suggested.

 


Faking Genius

Geniuses are rare in life, but common in fiction. No offense to our writing class, but I suspect a lot of these fictional geniuses are written by smart-but-not-genius writers. But how can this be? How does a non-genius author write a genius character? If the character is smarter than the author, then their thoughts and decisions are, by definition, the kind of things the author wouldn’t think of when in that situation!

How do you fake genius? I’ve noticed three strategies authors use.

House, M.D.

The Genius Who Knows Lots of Stuff

This is the most common and, to me, most annoying strategy. It treats geniuses as little more than people who know lots of facts. I haven’t watched that much House M.D., but from what I’ve seen he’s an archetype of this format. Someone comes in with weird symptoms and House is the only one who knows about the rare disease that matches the symptoms. He is a walking storehouse of weird disease trivia (I know, I know, there’s more to him than that, it’s an illustration not a criticism of the show).

This is a pretty easy strategy for a writer to implement. The writer just uses google and a bookshelf to give the genius a torrent of factoids to say. But it’s also the strategy that leaves me cold, precisely because it’s easy to implement. It’s no more illuminating than flipping through an old set of Trivial Pursuit cards.

A twist on this type is the genius who knows which facts are the right ones. In this case, the author lays down the “real” clues, but then buries them under a pile of extraneous detail and red herrings. The author then makes the genius (usually a detective here) able to sniff out the real clues from the red herrings. The veracity of the “real” clues is proved when they solve the problem. Maybe they point to a villain who confesses or tries to kill the protagonist when outed. Or maybe they point to a treatment that cures the patient. Afterwards, the audience is satisfied that the real clues were there, ready to be seen, but we stand in awe of the genius’ “ability” to see what we had missed.

Geordi La Forge

The Genius Only Intelligible via Metaphor

The next type of genius is so much smarter than us that his speech is incomprehensible. We, the audience, are like dogs trying to understand humans. We recognize some of the words (frequently the word is quantum), but their connections are baffling. Frustrated, the genius then explains the gist of his idea with a simple metaphor that we can understand. Often the genius has to be prompted by someone saying “in English please!”

Star Trek’s tendency to do this was lampooned on Futurama (episode 412, “Where No Fan Has Gone Before”):

Leela: I didn’t wanna leave them either, Fry, but what are we supposed to do?
Fry: Well, usually on the show someone would come up with a complicated plan then explain it with a simple analogy.
Leela: Hmm. If we can re-route engine power through the primary weapons and reconfigure them to Melllvar’s frequency, that should overload his electro-quantum structure.
Bender: Like putting too much air in a balloon!
Fry: Of course! It’s so simple!

Star Trek is hardly the only party guilty of this trick. The Marvel movies do this when Bruce Banner and Tony Stark talk, for example. It’s not absent from more high-brow stuff either (e.g., Dr. Shevek’s explanations of his new physics in Ursula K. Le Guin’s The Dispossessed). This strategy seems to be used a lot in science fiction, precisely because in that genre we are dealing with technologies that haven’t been invented. If the author could explain exactly how they worked, then it wouldn’t be science fiction!

I think this method can actually be very effective. I am fond of this example, from the independent movie Travelling Salesman. In the movie, one of the protagonists has cracked the P versus NP problem. In brief, a constructive proof that P = NP would allow us to solve difficult problems (like finding the factors of large numbers) at super speed. This is an open problem, so of course the writers can’t describe how it would really be solved. Instead, they use the following metaphor:

Tim: What if I took something like a quid coin, ok, and I buried it in the [desert]? It’s buried, you have no idea where it is, and I ask you to find it. How long would that take you?
Hugh: (scoffs) well-
Tim: Years, right, I mean millions of years if the desert were big enough.
Hugh: Sure
Tim: What if I melted the sand? Took all the sand in the desert and melted it. Glass. The whole desert becomes one big sheet of glass. So now finding the coin is easy, right? You just– you see it floating there. Change the sand to glass and finding the coin is trivial.

The metaphor conveys the idea that the genius has found a way to peer through all the complexity of a problem and see straight to the answer. But the writer doesn’t actually have to explain how it’s done.

This strategy is easy enough to write. You need a lot of complicated technical-scientific-literary buzzwords. You need a metaphor for the genius’ idea, but you don’t have to have the details worked out. Then you just alternate between the two modes of expression as needed. It’s kind of the empty calories of insight, because it gives the feeling of understanding without the reality, but I still prefer it to “the genius who knows lots of stuff.”

Ender Wiggin

The Handicapped Genius

A final type of genius is more satisfying, at least to me. In this case, the genius operates under a handicap, so that exhibiting high (but not genius) intelligence by modern standards is itself proof of genius. A great example is Ender Wiggin, from Ender’s Game. In the book, Ender has a number of cool insights about warfare in three dimensions, and in general exhibits adult-level intelligence. But he’s only six years old! A six-year-old exhibiting adult-level reasoning is believable as a genius.

Another common twist is to put your genius in the past and have them be ahead of their time. The character of Thomasina in Arcadia is an example. In the 1800s and without the aid of computers, Thomasina discovers fractals (actually discovered by Benoit Mandelbrot in the 1970s). When the character is ahead of their time, the author has an excuse to illustrate the derivation of actually brilliant things (like fractals). But because these things have been discovered, with a bit of research, the author can learn how they were initially derived and then just copy that for their genius.

This strategy is more satisfying to me than the others because it usually exhibits reasoning from A to B, illustrates the connections between ideas, and so on. The facts aren’t just a torrent, but form a web of relationships. And for characters who are ahead of their time, you might get an idea of what it’s like to be inside the mind of genius. The catch is, you are actually reading a sort of disguised biography of whatever genius discovered the thing (like fractals) we are pretending was discovered by the fictional genius.

How to Fake Genius

So that’s how I’ve seen it done. Despite the tone of the above, I actually don’t think these are bad places to start. These strategies do get at some truths about geniuses: they do know a lot of stuff, and they frequently are unintelligible unless they talk down to us using simple metaphors. Read a pop-physics book for copious examples.

But I would love it if we could go further. If I could feel what it’s like to really be in the head of a first class mind, that would be great. Can we do better? I’m not a writer of fiction, a genius, or even a psychologist, but I have done some research on “innovation,” so I’m not 100% unqualified to make some suggestions. Specifically, I think a good genius character should have the following characteristics:

  1. Geniuses know many things.
  2. Geniuses think with both speed and endurance.
  3. Geniuses think clearly.
  4. Geniuses have a lot of working memory.
  5. Geniuses make unusual connections between disparate concepts.

Most of these traits are not that hard to fake with time and tools. Let’s take them in turn.

1. Geniuses know many things

This is the one nearly everyone gets right, so I’ll be brief. Google and libraries are your friend. A team of writers can pretend all their accumulated knowledge fits in one genius’ head.

2. Geniuses think with both speed and endurance

Another easy one to fake. The writer can ponder the perfect witty retort for their genius for an hour, a week, or a year. But when they put pen to paper, it will seem as if it was instantly on the genius’ lips.

Geniuses are also frequently capable of intense focus for long periods of time. The author can afford to be scatter-brained, as long as they have more time than the genius to ponder. The audience need not know that one day of focused attention by the fictional genius took the writer a few months of scattered attention.

3. Geniuses think clearly

By this, I mean that geniuses don’t make weak arguments and logical errors. Unfortunately, as laid out at length in Mercier and Sperber’s The Enigma of Reason, individuals have a hard time objectively evaluating the strength of their own arguments. This is bad news, because Mercier and Sperber also present evidence that humans are quite good at objectively evaluating the arguments of others. If the author gives their genius a bad argument, the audience is more apt to spot it than the author.

Fortunately, we can use the same trick to our advantage. A good way to ensure your genius’ arguments are strong is to find a partner (or several partners) to talk them over with. A group of debaters, each of whom is individually biased towards their own argument, can nonetheless form a very clever collective intelligence, because they can objectively evaluate each other’s arguments. The author can then bring these disparate voices into the head of a lone genius, making the collective mind a singular one.

4. Geniuses have a lot of working memory

On average people can hold about 7 pieces of information in their head at the same time, some more and some less. Geniuses, I presume, can hold more. This is important because it’s much easier to see connections between ideas that are held in working memory. Thus, geniuses can perhaps see how larger sets of facts are connected to each other.

Now we are getting into terrain a bit harder to fake. Pen and paper are useful tools for keeping facts close to hand if not in your brain. You can work out the genius’ idea with lots of time and paper (including a lot of paper that is discarded) and then pretend it all happened in their head. Another possible technique is to “chunk” several pieces of information into a broader concept, so it can be worked with. This takes longer than it would for a genius (you have to spend time understanding the chunked concept), but it’s a price of faking genius.

5. Geniuses Make Unusual Connections Between Disparate Concepts

This is hardest to fake. One possibility is to mine your own life for the top 2 or 3 epiphanies and then to reverse engineer a scenario for them to emerge. It helps if you keep a record of your thoughts. Alternatively, you might pick a few disparate subject areas, read deeply in them, and attempt to harvest a surprising connection or two. Again, reverse engineer a setting for those connections to emerge. In either case, the goal would be to make it seem as if these kinds of realizations are ordinary events for the fictional genius.

Fake Geniuses Among Us

It’s irresistible to wonder if we can’t use similar tricks to fake genius in our real lives. I think it’s not only possible but common. Indeed, this is the kind of thing academics and scientists do all the time. We cite things we haven’t read carefully. At seminars, only one person presents, even if the work came from many. Our papers omit the missteps, dead-ends, and other frustrations of research. There’s no place in a paper’s methods section to write “then I thought about the problem off-and-on for two years.” We talk our ideas through at length with our colleagues. We use computers and paper to augment our paltry memory. And we pick and choose research questions that are well suited to the weird ideas we want to explore.

If there’s a larger point, it’s this: I am suspicious of the notion that the difference between us and geniuses is one of kind and not merely of degree. I am suspicious that they can ever be incomprehensible, so long as we give ourselves sufficient time and tools to work out their thoughts. Brainpower, time, and thinking tools are all inputs into great ideas, but to a large degree I think we can substitute the latter two for the first.

 


Finding the Right Metaphor

Last post introduced the hypothesis that having more metaphors equips people to innovate, because it expands their set of tools for thinking about novel settings. It presented some reasons to think this is true. This post pushes back on that simplistic hypothesis. Having the right metaphor is only half the battle; you also have to find it.

Could it be that having more metaphors is sometimes a handicap? First off, having more metaphors to search through could make it harder to find the right one. Second, having more metaphors might cause a problem analogous to over-fitting. You might have a metaphor that fits many of the surface details of the problem at hand, but not the smaller number of crucial “deep features.”

Metaphors in Alchemy and Chemistry

This is an interesting idea tossed out by David Wootton in The Invention of Science.

For classical and Renaissance authors every well-known animal or plant came with a complex chain of associations and meanings. Lions were regal and courageous; peacocks were proud; ants were industrious; foxes were cunning. Descriptions moved easily from the physical to the symbolic and were incomplete without a range of references to poets and philosophers. [p. 81]

Wootton suggests this formed a sort of reasoning trap. Every potential metaphor had so much baggage in the form of irrelevant features that it became hard to use them well.

I think the use of metaphors in alchemy provides a good illustration of this phenomenon. Gentner and Jeziorski (1990) compare the use of metaphor/analogy in alchemy (prior to 1550) and chemistry (post-1600s). Compared to chemistry, alchemy seems to be endlessly led astray by metaphors chosen on the basis of red herring associations.

As explained by Gentner and Jeziorski, a goal of alchemists was to transmute base metals into more valuable ones via the “philosopher’s stone.” This stone was often called an egg, since eggs symbolized the “limitless generativity of the universe.” From Gentner and Jeziorski, a sampling of alchemical thought:

1. It has been said that the egg is composed of the four elements because it is the image of the world and contains in itself the four elements…
2. The shell of the egg is an element like earth, cold and dry; it has been called copper, iron, tin, lead. The white of the egg is the water divine, the yellow of the egg is couperose [sulfate], the oily portion is fire.
3. The egg has been called the seed and its shell the skin; its white and its yellow the flesh, its oily part, the soul, its aqueous, the breath or the air.

As an egg is composed of three things, the shell, the white, and the yolk, so is our Philosophical Egg composed of a body, soul, and spirit. Yet in truth it is but one thing [one mercurial genus], a trinity in unity and unity in trinity – Sulphur, Mercury, and Arsenic. [p. 12]

Even as brilliant a mind as Isaac Newton got lost in the alchemical thickets of meaning. Mercier and Sperber give us this sampling of his alchemical musings:

Neptune, with his trident leads philosophers into the academic garden. Therefore Neptune is a mineral, watery solvent and the trident is a water ferment like the Caduceus of Mercury, with which Mercury is fermented, namely, two dry Doves with dry ferrous copper. [p. 326]

Gentner and Jeziorski contrast this style of thinking with the use of metaphor/analogy by later chemists. For example, Robert Boyle, trying to illustrate how individually minute effects can have large-scale effects in aggregate, uses metaphors of ants moving a heap of eggs and wind tugging on the leaves and twigs of a branch until it snaps off (Gentner and Jeziorski p. 8). Sadi Carnot uses the metaphor of water falling through a waterfall to understand the flow of heat through an engine.

What’s interesting is that neither Boyle nor Carnot used novel metaphors that had been unavailable to the alchemists (although the math associated with Carnot’s model was new). It wasn’t the availability of new metaphors that differentiated chemists from alchemists. It was the selection and use of existing ones.

It’s About Selection

Metaphors are an effective form of amateur modeling. When facing a novel situation, we take a leap of faith that a similar and known situation will serve as a useful map of the unexplored terrain. Having more metaphors at your fingertips increases the chances one of them will be a good map for the situation. But the ability to find this metaphor matters nearly as much as having it. We need a library that is large but also organized.

Hofstadter and Sander argue metaphor selection is the real test of domain expertise. There are a lot of dimensions along which a metaphor can match the thing to be explained. I’ve written before that it’s the deep features that need to be matched, but there aren’t universal guidelines for differentiating a deep feature from a surface feature. Alchemists thought surface similarity to an egg was the important feature for proto-chemical work. That seems silly to us now, but I suspect that’s because we assume the world operates according to impersonal laws. If you believe the world is instead run by supernatural entities with a penchant for communicating revelations and other hidden knowledge to mankind, then any features with theological symbolism probably seem like the deepest ones of a problem.

I can’t resist one more example. Crosby, in his excellent book on quantification in Europe over 1250-1600, makes a similar point about something as seemingly association-free as numbers:

Western mathematics seethed with messages… in the Middle Ages and Renaissance. Even in the hands of an expert – or, especially, in the hands of an expert – it was a source of extraquantitative news. Roger Bacon, for instance, tried very hard to predict the downfall of Islam numerically. He searched through the writings of Abu Ma’shar, the greatest of the astrologers who wrote in Arabic, and found that Abu Ma’shar had discovered a cycle in history of 693 years. That cycle had raised up Islam and would carry it down 693 years later, which should be in the near future, Bacon thought. The cycle was validated in the Bible in the Revelation of St. John the Divine 13:18, which Bacon thought disclosed that “the number” of the Beast or Antichrist was 663, a number certain to be linked to other radical changes. [p. 121]

Never mind that the number of the beast is 666 – Bacon’s Bible apparently had a typo – and that neither 666 nor 663 is equal to 693! In this era, those were not “deep features” of metaphorical similarity.

Hofstadter and Sander essentially say practice and subtle skills determine how good an expert is at selecting the right metaphor. For many domains, that is surely correct. But I see two other tricks for organizing our library of metaphors.

Unification

The first trick is unification. If we can subsume individual cases under more and more universal ones, then we reduce the number of metaphors we have to search through. We also reduce the risk that we overfit to a metaphor with many surface similarities. To stick with our metaphor of a metaphor library, this is like keeping one book from a shelf.

For Hofstadter and Sander, this is essentially what categorization is. They give the example of the category mother emerging from a child’s encounter with more and more people who are somebody’s mother. Whereas the child may initially have to remember a large set of disconnected facts – Rachel is a mother, Sarah is a mother, Thomas is not a mother, Rebecca is not a mother, etc. – over time the category of mother emerges. When a new person is encountered, the child no longer has to mentally compare them one by one with Rachel, Sarah, Thomas, Rebecca and others. Instead, it can quickly fill in a lot of gaps in its knowledge about the person if it finds out she fits in the mother category.

This is common in science, where models belong to the same family as metaphors (link). For example, many results in economics might initially be derived using a specific functional form. We might assume demand is given by the equation q = A – Bp, where p is a price and A and B are positive numbers. Or we might assume it’s given by q = A – B log p or q = Ap^-B. This requires us to carry around all these different equations in our minds. Life is much simpler when we are able to generalize our result to any continuous function where demand is falling in price.
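
To see what the generalization buys, note that each special case is just a particular function with a negative slope:

    q = A – Bp        →  dq/dp = -B < 0
    q = A – B log p   →  dq/dp = -B/p < 0
    q = Ap^-B         →  dq/dp = -AB·p^(-B-1) < 0

Any result we can prove using only dq/dp < 0 covers all three cases at once, along with every other downward-sloping demand curve we haven’t bothered to write down.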

You can see the same push for unification across most domains. It’s probably taken to its extreme in physics, where the quest for a single unified theory for the universe is taken to be a holy grail. From my outsider’s perspective (correct me if I’m wrong), historians seem to lie on the opposite end of the spectrum. In that field, detail matters.

Source: XKCD

With unification, we sacrifice details but we hopefully get the big picture right. It’s usually worse to get the big picture wrong than to get the details wrong, and since unification helps us zero in on the right “big picture” metaphor, it’s valuable. But unification becomes a problem when the knowledge domain resists it. This can happen when the details matter.

Systemization and Dani Rodrik’s Growth Diagnostics

When you can’t subsume everything under one case, you have to organize the cases. It’s fairly common to organize metaphors into big categories and then leave it at that. I’ve done that myself, grouping the representations used by innovators into five categories: rules, probabilistic statements, metaphors, neural networks, and instantiations. But you can also create rigorous processes for sorting through these categories and pinpointing the right metaphor for a given situation. The best example of this that I know of is Dani Rodrik’s growth diagnostics.

A bit of context is necessary to explain this. Rodrik is (among other things) a development economist. His growth diagnostics is a process for finding the correct economic model in that context. The problem development economists try to solve is why some national economies fail to grow at desired rates. Economic growth is a very complex and poorly understood phenomenon, and there are many competing models of the process. Each of these models is a simplification, and each provides different kinds of policy advice. One might emphasize the rule of law and secure property rights, another investment in education, and a third subsidies for favored industries. Rodrik’s growth diagnostics helps you pinpoint the model most applicable to the setting. Access to a good model then allows you to think through what the impacts of various policies might be.

Figure 2. Growth Diagnostics (from One Economics, Many Recipes by Dani Rodrik)

Growth diagnostics is basically a decision tree (figure 2). It starts at the top with the problem: insufficient private investment and entrepreneurship. Rodrik divides the possible causes of this into two categories: a low return to economic activity, or a high cost of finance. He then provides some suggestions for what kinds of evidence to look for to determine which is the case (e.g., “are interest rates low?”). Suppose we have decided there is a low return to economic activity. Moving down the tree, is the problem that there aren’t socially beneficial things to do (low social return), or merely that the private sector cannot find a way to make useful things profitable (low appropriability)? Again, Rodrik suggests specific things to look for to help you determine which branch of the tree you should descend to.
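
Since the diagnostic is essentially a nested if-then, the top of the tree can be caricatured in a few lines of Python. The questions and model labels below are my loose paraphrase of figure 2, not Rodrik’s exact criteria:

    def diagnose(economy):
        # Root question: is finance scarce, or are worthwhile projects scarce?
        if economy["interest_rates_low"]:
            # Cheap credit that nobody takes up suggests a low return to
            # economic activity, so descend that branch of the tree.
            if economy["social_returns_low"]:
                return "model: weak fundamentals (geography, human capital, infrastructure)"
            return "model: low appropriability (weak property rights, taxes, market failures)"
        # Otherwise the constraint looks like a high cost of finance.
        if economy["domestic_savings_low"]:
            return "model: low domestic saving or bad financial intermediation"
        return "model: poor access to international finance"

    print(diagnose({"interest_rates_low": True, "social_returns_low": False}))
    # -> model: low appropriability (weak property rights, taxes, market failures)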

Following the tree gets you down to a simple economic model of what is constraining growth. Economics has failed to discover a single model that explains everything, but with a procedure for finding the right model, it remains useful. It illustrates how a well-stocked library of metaphors can be made maximally useful when combined with a framework for zeroing in on the right one.

Summary

Innovation, by definition, requires stepping out of the familiar and into the unknown. These sojourns go better for us when we have a map of the territory. Metaphors can serve this function; they assert the unknown is “like” the known. Having more of these maps is useful, because it’s more likely one of the maps is a good fit (link). But this is only true if we can find that map. Maps with too many details can lead us astray, because the details might fit really well but the big picture is off. If having more maps is going to be useful, we need to organize them. One way is to prune our collection to a small number of maps with only the most important details. The other is to create a meta-map over our maps: a process for determining the deep features of the situation and matching them to the corresponding metaphor.

 

Categories
Uncategorized

Do Metaphors Make Us Smarter?

One of the ways we navigate a world full of novel situations and problems is by using metaphors. When facing a new situation or problem, we take a leap of faith that things we don’t know are like things we do know. We search through our memory and find a situation that is “similar” to the one at hand. We use that previous experience as a metaphor for the current one. If it’s a good metaphor, it’s a roadmap through the unknown.

An implication of this is that having more metaphors and more diverse metaphors is a powerful asset for thinking. All else equal, having more metaphors increases the chances one of them will be a good fit for your current problem. In this post, I’ll present some arguments that suggest this is the case. The next post adds some nuance: our ability to navigate our personal library of metaphors matters as much as (or more than) its size.

Three Cheers for Metaphors

I’m unaware of any study that directly compares innovation and the number of metaphors (drop me a line if you know of one). But there are a few lines of evidence that strike me as at least consistent with the theory.

1. Metaphors to Solve Problems

The closest thing we have to a direct test of the theory is a family of psychology experiments that run as follows:

  1. Give some subjects a metaphor well suited to solving a problem.
  2. Don’t give it to a control.
  3. See if the metaphor-equipped group is better at solving the problem.

Gick and Holyoak (1980) is an early example of the type. The authors asked study participants to solve the following problem. A patient has a stomach tumor that must be destroyed. No surgery is permitted, but a beam of radiation can destroy the tumor without operating. The problem is that any beam strong enough to kill the tumor is also strong enough to kill the tissue it must penetrate to reach the tumor. Any beam weak enough to leave the healthy tissue unharmed is also too weak to destroy the tumor. What should the doctor do?

While you ponder that, let me tell you another story. Totally unrelated, I swear. Once upon a time there was a general who wanted to capture a city. His army was large enough for the task, and there were many roads to the city. Alas, each road was mined. Any force large enough to take the city would detonate the mines as it moved down the road. A smaller force could move down the road without detonating the mines, but would be too small to take the city. What to do?

Fortunately, the general came up with a solution. He divided his army into many smaller divisions, and sent each down a separate road. Each division was too small to detonate the mines, and so they all converged on the city at the same time, and captured it.

Wow, what a neat story.

Now, have you figured out the tumor problem?

The trick is to use many weak beams of radiation, each pointing to the tumor from different directions. They should all intersect at the tumor’s location but nowhere else. Their combined energy will destroy only the tumor, and not the healthy tissue that must be penetrated.

Figure 1. Converging armies of radiation

The preceding is basically the experiment that Gick and Holyoak perform. They give people the tumor problem; only 2 in 30 were able to solve it on their own. Then they tell them the story of the general. This story’s solution is a tailor-made analogy for the tumor problem (converging weak forces), and 14 of 35 people who got the general story were able to solve the tumor problem. An additional 12 came up with the solution after being given a hint to use the story.
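
For a rough sense of the effect sizes, the reported counts translate into solve rates like this:

```python
# Back-of-envelope solve rates from Gick and Holyoak (1980), as reported above.
baseline = 2 / 30                      # no story given: ~7%
with_story = 14 / 35                   # general's story given: 40%
with_story_and_hint = (14 + 12) / 35   # story plus an explicit hint: ~74%
print(f"{baseline:.0%} -> {with_story:.0%} -> {with_story_and_hint:.0%}")
```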

A meta-analysis of 57 similar experiments obtains similar results. The intervention reliably has a small-to-medium sized effect. Giving people a solution wrapped up in a metaphor helps them find the solution. A bit.

2. Metaphors and Forecasting

Do these lab results carry over into real world settings? Forecasting  forms a set of problems that are not contrived but do have definite “correct” answers. Phil Tetlock has been asking people to make political and economic forecasts for decades, and tracking the results.  Drawing on a very old dichotomy, Tetlock classifies his forecasters as either “foxes” or “hedgehogs.” The idea is that “the fox knows many things and the hedgehog knows one big thing.” Tetlock consistently finds that foxes are better forecasters than hedgehogs.

Now, this isn’t really a direct test of the hypothesis. Tetlock isn’t “counting metaphors” for his forecasters. He classifies his thinkers as foxes or hedgehogs based on a series of questions about cognitive thinking style. These include agreement with statements such as “even after making up my mind, I am always eager to consider a different opinion” and “I prefer interacting with people whose opinions are very different from my own.” In general, Tetlock uses language like “hedgehogs view all situations through the same lens” and “foxes aggregate information, sometimes contradictory information, from many different sources.”

However, there is evidence that foxes make use of more and different analogies. Tetlock notes “foxes were more disposed than hedgehogs to invoke multiple analogies for the same case (Tetlock, p. 92).” And it seems to help them navigate novel situations.

3. Metaphors and the Individual

There are also some observational studies consistent with the idea that more metaphors make us smarter.

  • Highly creative people (which we can use as a proxy for innovation) tend to be open to new experiences and curious (Sawyer, p 64). These are two channels through which people may acquire additional concepts that can be used as metaphors.
  • People who have lived abroad and multicultural individuals show more creativity. Living in a different culture is, of course, a rich source of new metaphors.
  • Scientific “geniuses” tend to have broad interests (Simonton, p. 112): they have more diverse hobbies (painting, art collecting, drawing, poetry, photography, crafts, music) and are voracious readers, including extensive reading outside their main discipline. Again, this is hardly a direct measure of the size of their metaphor libraries. But broad interests would tend to foster a larger and more diverse set of potentially useful metaphors.

Of course, alternative and additional explanatory factors are also possible. But these threads are at least consistent with the story that having access to more metaphors facilitates innovation.

4. Metaphors over Time

It’s not hard to establish that the set of metaphors has grown over time. In their book on metaphor, Hofstadter and Sander compile an illustrative list of concepts unavailable to most generations of humanity. Each of these is available as a metaphor to people alive today, but not to people living, say, 100 years ago. Here are 25 examples from their list of over 100 (p. 129):

  • DNA
  • virus
  • chemical bond
  • catalyst
  • cloning
  • email
  • phishing
  • six degrees of separation
  • uploading and downloading
  • video game
  • data mining
  • instant replay
  • galaxy
  • black hole
  • atom
  • antimatter
  • X-ray
  • heart transplant
  • space station
  • bungee jumping
  • channel-surfing
  • stock market crash
  • placebo
  • wind chill factor
  • greenhouse effect

If these concepts are better metaphors for some novel situations, then denizens of the modern world have a leg-up on their ancestors. They are equipped with a bigger toolbox for handling novelty. This could be one factor explaining the Flynn effect, whereby IQ scores on standardized tests rise in each generation.

Additionally, there are many scientific theories whose discovery depended on metaphors that were not always available. The proliferation of clocks in Europe may have made Europeans amenable to thinking of the universe as a machine following strict natural laws, instead of the whims of spirits (Wootton link). Niels Bohr used the heliocentric model of the solar system as a metaphor for the atom – a metaphor that would have been basically unavailable before Copernicus. And Einstein used principles derived from Newton’s classical physics to guide his hunt for the theories of special and general relativity. Again, the evidence is at least consistent with more metaphors being an asset to thinking.

5. Metaphors across Geography

More controversially, some have speculated that similar channels explain the correlation between economic prosperity and test performance (IQ, standardized math and others). GDP per capita is positively correlated with the average performance of a nation on standardized tests. A lot of people argue that this is because human capital/intelligence/IQ (whatever you want to call it) leads to economic prosperity (more innovation, better policies, more cooperation, more patience, etc.). But the causal arrow could just as well point in the opposite direction. Countries with more economic prosperity tend to have more complex economies (link to Hidalgo), more literacy, and greater access to digital information. All three of these channels may well expose the typical citizen to a more diverse set of concepts and processes. And these concepts and processes will then be available as metaphors. This bigger library of metaphors could then be a reason people in these countries do better on standardized tests.

Finally, just as there is some evidence multicultural individuals are more creative, countries with populations drawn from lots of different places tend to have higher patent intensity and more economic prosperity. Again, this is hardly a direct test, but we can imagine people from different countries bring different sets of metaphors with them. A country with people from many origins might have a more diverse set of metaphors, which could partially account for its higher performance in innovation.

A Virtuous Circle?

Item #1 above provides the most direct evidence that access to metaphors facilitates problem solving. The remaining items all show that more diverse people, information, interests, and concepts tend to lean in the same direction as various metrics for innovation. Correlation isn’t causation, but it’s possible the set of available metaphors is a causal link between the two. If that’s the case, then as society gets more metaphors it gets better at innovating. Maybe we are living in a virtuous cycle where innovation leads to social complexity and social complexity leads to a wider set of metaphors and access to a wider set of metaphors leads to innovation!

Figure 2. A Virtuous Circle?

I think there is some truth in that story, but also that it’s only part of the story. Maybe a small part. For items #2-#5 above, the evidence is pretty indirect. We don’t know if people really do expand their set of metaphors via the hypothesized channels, and we don’t know if they use those metaphors to innovate. Lots of other things are going on, and we don’t know how much those other factors matter. We also can’t be sure this isn’t a spurious correlation wherein “smart” people have diverse interests, but these don’t inform their ability to innovate.

Item #1 provides the most direct evidence that metaphors are causally related to innovation. However, even when we give people a perfect metaphor right before we test them, the effect is not large. And if we wait a day to test them, performance declines. It appears that there is a lot more to innovation than simply having access to the right metaphor.

Staying with the metaphor of a library of metaphors, a major problem might be our ability to search that library. Which metaphor is the right one for a given problem? This is the problem we turn to in the next post…

 

Categories
Uncategorized

Innovation: Why do Metaphors Work?

The world is full of regularities. One way we encode information about these regularities is with metaphor. When I write about “metaphor”, I don’t mean poetic comparisons. In this post, metaphors or analogies (I use the terms interchangeably) encode information about regularities. They assert something we don’t understand is similar to something we do. Innovators then use these metaphors as maps to guide their travels in the unknown.

Some famous examples:

  • Using Uber as a template for a new kind of business (“uber for X”)
  • Applying the lessons of lean manufacturing to startups (the “lean startup” model)
  • Viewing the atom as a miniature solar system (the Bohr model)
  • Extending the principle of “no privileged frame of reference” to travel near the speed of light and to accelerating frames (special and general relativity)

In each case, a more familiar domain (existing business or scientific models) is assumed to apply in a domain that differs in fundamental ways.

This is utterly commonplace. But if you step back a bit and think about it, it’s puzzling. Why does this leap of faith ever work? More specifically, why does it work better than chance? In some cases, we can maybe assert that the same phenomena underlie the two cases. For example, maybe all “Uber-type businesses” rely on the same underlying regularities. In that case, using Uber as a metaphor for another Uber-type business is really just a way of drawing lessons from the broader category of “Uber-type businesses.”

But I think there are many more cases where the examples do not appear to draw on the same underlying phenomena in a meaningful way. The Bohr model is particularly egregious; why should the behavior of planets have anything to do with the behavior of the atom? In fact, they differ in really important ways; yet it was a fruitful metaphor.

Before we answer these questions, let’s take a detour.

Economic Modeling

I’m an economist. Much of my professional life has revolved around little mathematical models of social phenomena. These models are simplistic. They are hard to understand without training. And they don’t give us predictive ability with anything like the accuracy of physics. So why do we bother?

In a wonderful little book on economic methodology, Dani Rodrik provides some reasons. First off, the reason economists use math is not because we are so clever, but rather because we are so dumb. Math forces a model to have internal consistency. You have your assumptions, you have your conclusions, and there are unambiguous rules for deriving the one from the other. Many things that seem obviously true when expressed in language are revealed to be internally inconsistent when expressed in math.

That explains the math, but not the simplicity. Why not more complicated and realistic models, built from math? There are a few reasons.

First, let’s discuss why simple models work at all in a complex world. Let’s assume the outcomes of any social process are derived from the interaction of underlying factors. There might be a huge number of these factors, and they can interact in all sorts of different ways. However, there’s no reason to believe these factors are equally important to the outcome. In all cases, some features will matter more than others. If a small number of interacting factors plays a big role relative to the others, then understanding those goes a long way to understanding the situation. If not, we label the problem “chaotic” or “random” or “complex.”

So there’s a selection issue at play. In most cases, a small number of factors will matter more than the rest. If we model those and say the rest is random, then we will do about as well as we can. In cases where a small number of factors is not sufficient to make meaningful predictions, we don’t model the problem at all and simply make decisions at random.
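
Here is a minimal numerical illustration of that logic, with invented weights: when a couple of underlying factors carry most of the weight, a simple model tracking only those factors captures nearly all the variation in outcomes.

```python
# A toy version of the "a few factors dominate" claim. The weights below
# are arbitrary; what matters is that two of them dwarf the rest.
import random

WEIGHTS = [8.0, 4.0, 0.5, 0.3, 0.2, 0.1, 0.1]

def outcome(factors):
    return sum(w * f for w, f in zip(WEIGHTS, factors))

def simple_model(factors):
    return WEIGHTS[0] * factors[0] + WEIGHTS[1] * factors[1]  # top two only

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

samples = [[random.gauss(0, 1) for _ in WEIGHTS] for _ in range(10_000)]
full = [outcome(f) for f in samples]
errors = [outcome(f) - simple_model(f) for f in samples]
print(1 - variance(errors) / variance(full))  # ~0.99: the simple model suffices
```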

So simple models can be useful. But shouldn’t a more complex model be even more useful? The answer would be yes, if we started with the correct simple model and then added some complexity. The problem is that as models get more complicated, it gets harder and harder for practitioners to identify which features matter most. The second reason economists use simple models (according to Rodrik) is to isolate important causal mechanisms. If you are going to get something right, it better be the most important part. You want to get the skeleton right, so to speak, not the elasticity of the skin.

What do you want to get right? Rodrik uses the term “critical assumptions.” For Rodrik, the “critical assumptions” in an economic model have a specific meaning. It is those assumptions whose modification produces substantive differences in the model’s conclusions. For example, if you want to know what will happen to employment when you raise the minimum wage, you can choose between at least two models. In a perfectly competitive model, an increase in the minimum wage will lower employment. But in a model where firms have market power over hiring (a monopsony model), an increase in the minimum wage may raise employment. Both models are “correct” so long as their critical assumptions are met. In this case, the critical assumptions pertain to the degree of market power for hirers.
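
To see a critical assumption doing its work, here is a small numerical sketch. The functional forms and parameter values are invented for illustration; the point is only that the same policy question gets opposite answers depending on whether hirers have market power.

```python
# Two models of a minimum wage hike. All parameters are illustrative.
ALPHA, BETA = 1.0, 0.5     # inverse labor supply: w = ALPHA + BETA * L
GAMMA, DELTA = 10.0, 0.5   # labor demand (MRP): w = GAMMA - DELTA * L

def competitive_employment(w_min):
    """Perfect competition: the market clears unless the floor binds,
    in which case employment is read off the demand curve."""
    L_eq = (GAMMA - ALPHA) / (BETA + DELTA)
    w_eq = ALPHA + BETA * L_eq
    return L_eq if w_min <= w_eq else (GAMMA - w_min) / DELTA

def monopsony_employment(w_min):
    """Monopsony: the single hirer equates MRP with the *marginal* cost of
    labor (ALPHA + 2*BETA*L), so it under-hires relative to competition."""
    L_m = (GAMMA - ALPHA) / (2 * BETA + DELTA)
    if w_min <= ALPHA + BETA * L_m:
        return L_m  # floor below the monopsony wage: no effect
    # A binding floor flattens the marginal cost of labor at w_min, so
    # employment is the lesser of supply and demand at that wage.
    return min((w_min - ALPHA) / BETA, (GAMMA - w_min) / DELTA)

for w in (4.0, 5.0, 6.0):
    print(w, competitive_employment(w), monopsony_employment(w))
# A floor between the monopsony wage (4.0) and the competitive wage (5.5)
# raises monopsony employment (6 -> 8) while leaving the competitive market
# untouched; push the floor above 5.5 and employment falls in both models.
```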

Unlike particle physics, the social world is too complex to capture with “the one true model.” Instead, theoretical economics advances by adding many simple models to the library of economic knowledge. Simple models are preferred because they are frequently good enough and because the art of an applied economist is knowing which model to use. Simplicity makes it easier to choose the right model. A good economist understands the critical assumptions that must apply for a model’s predictions to play out in the real world.

Metaphors as Amateur Models

Back to metaphors. What makes for a useful metaphor? A metaphor needs to match features of the object/event to be explained. However, in any real world situation, there are a huge number of features that you could use to match. There are a correspondingly large number of possible metaphors. Good metaphors match the deep/structural features, rather than the surface ones. For example, suppose we encounter the following animal:

Figure 1. An Unknown Animal. What is the best analogy?

We’ve never encountered this creature before. We’re in the unknown. But we can make some reasonable predictions about its behavior by drawing analogies with things we do know. And we have a lot of features to draw on for metaphor/analogy. Surface-level analogies such as the following may lead us astray:

  • Size: The animal is as big as a whale. Inference: like a whale, the animal is probably harmless.
  • Covering: The animal is feathered like a bird. Inference: like a bird, the animal does not view us as a food source.
  • Color: The animal is brown like my dog. Inference: like my dog, the animal is a friendly ally.

In contrast, the deep/structural feature that matters most is:

  • Predator: The animal is a large predator. Inference: like a bear/lion/alligator, I am in danger!

What makes a feature “structural?” In their exhaustive book Surfaces and Essences: Analogy as the Fuel and Fire of Thought, Hofstadter and Sander write:

In the case of problems to be solved, structural features are those whose alteration would change the goal of the problem or the pathways to solving the problem. [p. 340]

So the deep features are the ones that, if different, would make a big difference. An alternative perspective on what makes a feature “structural” focuses on relations rather than parts. In The Stuff of Thought, Steven Pinker argues:

the power of analogy doesn’t come from noticing a mere similarity of parts… It comes from noticing relations among parts, even if the parts themselves are very different. [p.254]

Again: the key is to match the “deep” features, not surface similarities.

This language is remarkably similar to Rodrik’s thoughts on good economic modeling. Just as economists have a large set of models to choose from, we all have a practically boundless set of possible metaphors. And just as the key to a good economic model is getting the critical assumptions right, the key to a good metaphor is getting the structural features (and their interactions) right. Hofstadter and Sander even define structural features in a way quite similar to Rodrik’s critical assumptions. Both are the features that can’t be changed without significantly impacting your inference.

Indeed, modeling and metaphor appear to be part of the same extended family. Metaphors are like amateur models. And the utility of models in economics is that they serve as useful metaphors for complicated real-world social phenomena.

Why do metaphors work at all?

If modeling and metaphor belong to the same family, then this provides some insight into why metaphors can be so useful. Economic modeling is a practice designed to give us (limited) insights in very complex settings. It does this by stripping things down to a small number of important causal mechanisms.

Just as with social phenomena, in any real world situation where we are searching for a metaphor, there will be a huge number of potentially relevant factors. By chance, some of these will matter more than others in determining the outcome we care about. Those are the ones we should get right. We do that by matching the deep features of the object to something we already understand and which shares the same deep features.

Metaphors work because in situations that we don’t shrug off as fundamentally unpredictable, a small number of features interact and drive the outcome. When metaphor works, I suspect it’s because the number of situations in the universe where a small number of features matter is much larger than the number of qualitatively different ways a small set of features can interact. For any possible way a small set of features can interact, there are probably a large number of corresponding examples. Each of these is a candidate for a useful metaphor. Each captures the way the small set of features can interact.

Take the Bohr Model as an example. If we care about the outcome “stability of an atom,” there are many possible features we could investigate: position of the atom in the universe, duration of the atom, number of constituent elements, size of elements, mass of elements, velocity of elements, etc. Some of these matter (number, mass, velocity), and others do not (position in the universe, duration of the atom). The set of features that drives the outcome is small, and so finding another example where similar features drive the outcome may be fruitful. The solar system is one such example, but there could have been others.

Why not models?

Economic modeling and the utility of metaphors both rely on a small enough set of features and interactions for our brains to track. However, unlike modeling, metaphors bring with them the baggage of a large number of other “irrelevant” features. So why do we bother at all with metaphors? Why don’t we leap straight to models of feature sets, as we seem to in economics? Why do we add the extra confusion of a second real-world example, and all its baggage of irrelevant features?

Again, I think economic modeling has some lessons. When using a model to make inferences, there are two different mistakes we can make:

  1. Our model can be internally inconsistent
  2. Our model can be internally consistent, but we choose the wrong one.

Metaphors limit our possible mistakes to the second case.

Economists use math to ensure their models are internally consistent. I suspect metaphors perform the same function. A metaphor based on real world phenomena must be internally consistent, if only because it’s happened in the real world. If I’ve encountered A, then all the features of A must be able to fit together without contradicting each other. The existence of A in the real world has proven those features and their hypothesized interactions are internally consistent. Going forward, if I use A as a metaphor for a new situation, it’s possible that I’ve chosen the wrong metaphor, but at least I haven’t chosen something that’s impossible.

Metaphors have a couple of other useful features. Models and scientific theories are built up from the orderly interaction of different assumptions. With the exception of computer simulations and other black box techniques, we usually understand exactly what is happening in the model. This is necessary to maintain internal consistency and helps us identify critical assumptions. But it’s also limiting. The need to keep models tractable imposes severe constraints on the kinds of assumptions we can make, at least in the case of economics.

In contrast, metaphors are possible to use without true understanding. We can be familiar with something (e.g., the human body, love, traffic jams) and use it as a metaphor without having a deep understanding of how it actually works. This vastly expands our set of tools for inference, but at the cost of making it harder to identify which features are deep/structural.

Second, the need to maintain internal consistency means economic modeling is done in the language of math. More generally, models are built from the interplay of rule-like propositions. Again: this is useful for ensuring internal consistency, but it is simultaneously limiting. There are many other regularities and interactions in nature that are awkward to express in terms of rules. Metaphor lets us encode information about these regularities.

The predatory dinosaur from above is one such case. We can make inferences about how it might act, even though it would be awkward (I don’t say impossible) to derive those inferences from a mathematical model.

tl;dr

To sum up: metaphors are useful because the outcome of many situations is mostly determined by the interaction of a small number of factors. There aren’t that many ways a small number of factors can interact, so there are frequently real-world examples we can draw on that exhibit similar underlying interactions. Using real-world examples (instead of models) is useful because it (1) ensures our model is internally consistent, (2) lets us use examples even when we don’t understand exactly how they work, and (3) frees us from the awkwardness of writing everything up in rules/math/logic.

 

Categories
Uncategorized

Rewarding replication in science

I recently attended a workshop on open science. Open science is about making scientific processes more transparent, data freely available, and papers viewable by all. Among the many potential benefits of an open science model is increased confidence in scientific findings. This is particularly relevant in the midst of the ongoing replication crisis in many fields. The crux of the replication crisis is that many famous findings turn out to be very difficult to replicate. This raises the worrying implication that the original findings were just noise or were methodologically unsound.

Researchers are adopting at least two complementary strategies to deal with the problem. First, methodologies are being tightened up. Second, there is an active effort to increase the number of replication studies.

Ahead of the workshop, I spent some time with David Hull’s Science as a Process. I think Hull’s model of incentives in science can shed some light on how we got here, and also suggests some possible paths out. The rest of this post lays out Hull’s model, speculates on how it can explain the emergence of the replication crisis, and offers two suggestions (one modest, one ambitious) for increasing the supply of replication studies.

Hull’s model of scientific incentives

Hull, a philosopher of science, developed his ideas after close observation of a small scientific community (taxonomists and cladists). These scientists did not behave like disinterested rational truthseekers:

As it turns out, the least productive scientists tend to behave the most admirably, while those that make the greatest contributions just as frequently behave the most deplorably. [p.32]

For Hull, humans everywhere desire status. Scientists are no exception. He sees the genius of science as redirecting the selfish desire for status into the creation of new knowledge. It does this by creating a “market” for ideas.

The currency of this market is citation (or more broadly, use and influence). To get “rich” you need to generate knowledge that is used (i.e., cited) by your peers. In doing so, you gain status in the community. But the only way to generate new knowledge is to “buy” support from other knowledge by citing it.

For this to work, there have to be some kind of property rights in the creation of knowledge. This is achieved by the priority system: whoever publishes first owns the knowledge. This doesn’t mean they can prevent others from accessing it. Indeed, the priority system enforces prompt disclosure. But it does mean subsequent citations of the knowledge accrue to its original discoverer.

As Dasgupta and David (1994) point out, the priority system has some attractive features. Knowledge can’t be “unveiled” twice. Once you have it, you have it. The priority system nudges scientists to work on unsolved problems, because you get no credit for re-proving known results. There’s also a practical reason for the priority system. In any system where credit is assigned to people who are not first, it would be possible to get some credit by reading the first publication, pretending you had the same idea, and writing a paper.

For example, suppose we split credit for discovery among all scientists working on a problem. After all, many scientists are frequently working on the same problem, and it seems unfair to assign all the credit to whoever was first; that assigns a lot of weight to mere luck. However, such a system could be gamed by fast-follower scientists. Once an idea has been published, anyone can read it and have the idea. What’s to stop them from claiming they were working on it at the same time? (Remember, Hull does not assume scientists are paragons of virtue.)

There’s a second attractive feature of Hull’s model. People are not very good at seeking out information that challenges their beliefs, but they are pretty good at pointing out flaws in others’ beliefs. Again, scientists are no different. They are poorly suited to looking for holes and mistakes in their own work, but the citation market in science separates the tasks of generating and testing knowledge.

Why would a scientist verify the ideas of someone else? First, scientists can get their knowledge more broadly used by eliminating the competition. This requires tearing down rival theories and findings. Even human foibles like grudges and personal animosity can be leveraged into doing useful work in this model:

“…I think that the function of personal animosity in science is still an open question. It might after all serve a variety of purposes…  Scientists acknowledge that among their motivations are natural curiosity, the love of truth, and the desire to help humanity, but other inducements exist as well, and one of them is to “get that son of a bitch.”  [p. 160]

Second, if you want to generate knowledge that is used, you can’t build on foundations of sand. There is therefore an incentive to check the work you build on.

Bubbles in Science

This system has, on the whole, worked really well. We know more about the world than ever before, and we learned most of it during the few hundred years that we have had science. But the system isn’t perfect, and the incentives for replication are one shortcoming.

Replication work isn’t free. It takes a lot of time and effort to replicate a study, in some cases nearly as much work as the original study (there’s a reason it’s not part of the peer review process). I suspect the cost of research in general has gone up over the past century (a topic for another post), and the costs of replication probably went up alongside it.

Meanwhile, what is the benefit of a replication? Or, in the parlance of Hull, how does replication work enhance your status in the community? The benefits of replication are a bit more complicated than the costs, because there are direct and indirect benefits. Let’s consider each in turn.

First, replication studies might be directly used and cited. However, because of the priority system, little credit is assigned to successful replications of earlier work. While some might cite the original finding and note it was successfully replicated, not all will. And very few will cite the replication without the original finding.

A replication that challenges the original finding will probably get more attention than a successful replication, if only because it’s a more surprising result. But I worry failed replications won’t be cited as much as they should be. To the extent they are evidence against particular theories, they (and the now discredited original finding) are less likely to be cited by work in that theory. They may be cited by rival theories who seek to discredit competition. But I suspect this only occurs while a field is contested. Once one theory is ascendant, harping on the null results of the defeated theory seems at best a waste of time and at worst punching down. In contrast, a positive original finding might be cited for decades or more. (As an aside, the use of null results seems likely to be influenced by similar issues)

Second, replication work indirectly affects citations through its impact on what knowledge gets used. A failed replication of finding x redirects subsequent citations away from x and potentially towards rival ideas. A successful replication of finding x indicates x is sturdy enough to build on and encourages citation of x. In either case, the benefit is only realized by the replicator if they “own” rival ideas to x (in the case of failed replication) or if they own x or related ideas (in the case of successful replication).

Compared to original research, it seems like Hull’s model undervalues replication work. A replication that takes nearly as much work as an original finding, for example, is unlikely to get as many citations as an original finding. For any activity, when costs are high relative to benefits, people will gravitate towards substitutes. What are substitutes for replication work? One such substitute is the reputation of the scientist. Those who have done good work in the past will be trusted without as much verification in the future. Another substitute is to freeride on the judgment of the community. If at least some people are still doing independent verification, then highly cited work is more trustworthy. Both substitutes for replication contribute to a Matthew effect. The rich get richer.

These kinds of dynamics seem prone to bubbles. Suppose a paper by a famous author is mistaken, but this is never discovered because replication would be too much work. Instead, the paper is cited based on the strength of the author’s reputation. As it accrues more citations, other scientists interpret this as an endorsement by the community. Its citations are further increased. This in turn raises the profile of the original author. Their subsequent work (likely in the same area) is now more likely to be cited without verification. In this way, an entire research edifice may be built on sand.

When the bubble pops, you get things like the replication crisis.

Existing proposals to increase replication

So what to do? There is currently a big push underway to increase the supply of replications in science. The open science movement helps by reducing the cost of replication: it makes the research process more transparent and the original finding’s data freely available.

Other proposals I’ve heard try to make replications easier to publish. One proposal is for journals to accept or reject papers based on pre-registered research plans rather than results. Journals favor “surprising” results, which makes them loath to accept both null results and successful replications. The proposal would force journals to accept or reject based on a methodological plan, before the outcome is known. Another recent journal accepts replications that were submitted to good journals but rejected because the results were not surprising; to make the process painless, the journal accepts the peer review reports from the rejecting journal. A third proposal would commit journals to paying for replications of a random sample of accepted papers. Other proposals would integrate replications into graduate school training.

A modest proposal

I see no problem with trying these. But Hull’s model suggests the root of the problem is that replication work does not confer status. This is because it is not likely to be cited. My modest proposal is to increase the number of citations to replications by empowering journal editors to add them.

There is already precedent for this in an adjacent knowledge creation field. Patents also have to cite relevant “prior art,” whether in the form of other patents or not. Patent examiners can and do add citations to patents (omitted by the applicant) if they feel the citation is relevant. Today, these examiner-added-citations are indicated on US patents with an asterisk, but they are in every other way treated as a “normal” citation.

The format is not important, but I imagine this could look like figure 1. Replication studies are listed below the original finding, indented and perhaps accompanied by an asterisk (to indicate they were added by the editor).

Figure 1. Possible editor-added replication citation format

Would these citations confer status? I think so. Just as citations to original findings bolster a paper’s support, so too do citations to replications of those findings. Moreover, just like every other citation in the paper, they would credit the replication author by name. Furthermore, to the extent that a scientist’s career is summarized by various citation metrics (h-index, total sum, euclidean norm), these citations would “count” just as much as the rest. And finally, to the extent that journals chase citations too, an increase in citations to replications would increase their willingness to accept replications for publication.

I think this would help. Over time, if enough replication happens, a new norm may emerge in science wherein replications are cited without input from journal editors (encouraging such a norm has been suggested by Coffman, Niederle and Wilson). But it’s not a perfect system either. Which brings me to a second proposal.

A not-so-modest proposal

One problem with the above is that it creates scope for replicators to “free-ride” on highly cited papers. Think, for example, of hordes of graduate students running the same code on the same data, both made available by the original authors of the finding. You could end up with dozens of “replications” that add little value. This issue could be mitigated by editor discretion and the difficulty of judging which papers will be highly cited. But a bigger problem is that the above solution does little to address replications that challenge the original finding. These failed replications often push scientists into completely different research areas. There are no papers left to add citations to.

A more ambitious proposal is to try to estimate the marginal impact of a replication study on the original finding’s citations. For clarity, suppose a paper’s citations can be expressed by a function c(f,t), where f is the set of paper features that impact citation and t is the probability that the paper is “true.” This function is estimated empirically, with t possibly corresponding to the probability a replication effort matches the original finding. I assume c(f,t) is increasing in t.

The value of a replication study is given by:

replication value = | c(f,t’) – c(f,t) |

where t’ is the updated probability of truth after a replication (using Bayes’ rule). The replication value is the absolute value of the change in the original finding’s citations induced by the replication.

This formula has a number of nice properties. Successful replications raise t and their value is equal to the increase in citations associated with the rise in t. Conversely, when the replication challenges the original finding, the original finding receives fewer citations and the replication is rewarded for directing research away from the area. In either case, the value is larger for original findings with feature sets f such that they are highly cited if true.
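
Here is a toy implementation of the proposal. The citation function c(f,t) and the replication test’s error rates are invented placeholders; coming up with a convincing empirical c(f,t) is, as I note below, the hard part.

```python
# A toy replication-value calculator. The citation function and the
# replication test's accuracy are stand-in assumptions, not estimates.

def bayes_update(t, replicated, p_true=0.8, p_false=0.2):
    """Posterior probability the finding is true after one replication
    attempt. p_true: chance a true finding replicates; p_false: chance
    a false finding replicates anyway (both hypothetical)."""
    if replicated:
        return t * p_true / (t * p_true + (1 - t) * p_false)
    return t * (1 - p_true) / (t * (1 - p_true) + (1 - t) * (1 - p_false))

def citations(t, base=100.0):
    """Hypothetical c(f,t): expected citations rise with the probability
    the finding is true; `base` stands in for the paper's features f."""
    return base * t

def replication_value(t, replicated, base=100.0):
    return abs(citations(bayes_update(t, replicated), base) - citations(t, base))

t = 0.5
print(replication_value(t, True))    # first success moves t a lot: value 30
t = bayes_update(t, True)            # t is now 0.8
print(replication_value(t, True))    # second success moves t less: ~14
print(replication_value(t, False))   # a failure now would be big news: 30
```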

Second, if we use Bayes’ rule to update our estimate of truth, early replications move t a lot, and are rewarded accordingly (a sort of generalized priority system). It may also be possible to design ways of incorporating the quality of the replication (for example, bigger samples might count as better evidence), so that high quality ones have a bigger impact on t’. Along the same lines, if we can measure the correlation between different replication efforts, this could also be incorporated. Replications using the same dataset will likely have outcomes highly correlated with the original finding, and therefore provide less evidence (they move t’ less) than those that gather new data. All of this would serve to nudge scientists towards the most valuable replication work.

Would these replication value scores have any impact on scientist status? I admit this is less clear. When  c(f,t’) – c(f,t) > 0, the replication value could be used in conjunction with the first proposal to decide which replications to cite on any paper. In this way, it would really “get names on papers.” A more challenging case is when  c(f,t’) – c(f,t) < 0. In this case, authors may be hostile to having negative evidence added to their citations. Moreover, there may not be any papers citing the original finding that the replication can be attached to!

At a minimum, however, the replication value could be reported alongside other citation based indicators. Moreover, although I have not emphasized it in this post, citations are not the only way to obtain status in science. Professional recognition can take many forms: promotions, prizes, fellowships, appointment to the leadership of professional societies, etc. Replication value could be used to decide who gets recognized.

However, the main advantage of this approach is that it can be unilaterally implemented. There is nothing to stop a motivated individual with access to the right citation data from estimating their own c(f,t) and posting the replication value of different studies on the web. The main difficulty is probably in coming up with an estimate of c(f,t) that is convincing to the research community.

But that’s why I called this proposal not-so-modest.

Categories
Uncategorized

Autonomous Innovation with Neural Networks?

Earlier this year, an essay by Gary Marcus threw some cold water on the idea that humanity is on the brink of general artificial intelligence. Specifically, Marcus targeted Deep Learning, the method underlying modern neural networks. Marcus makes a series of critiques, but one prominent one (discussed earlier) is that neural networks have abysmal judgment for problems that lie outside the borders of their training set. This doesn’t mean they can’t be very good inside their training set. But give them a problem not well approximated by a mix of the features of examples they are trained on, and they fall to pieces.

On this (and other) dimensions, neural networks fare worse than human beings. But how important is this? In this post, I want to think a little about how far you can get with purely “interpolative innovation.” By interpolative innovation, I mean innovation that consists in the discovery of new things that are “mixes” of preexisting things. A lot of innovation falls under this category.

A nice 2005 paper by economist Ola Olsson serves as a roadmap. Olsson’s “Technological Opportunity and Growth,” published in the Journal of Economic Growth (paywall, sorry), makes no mention of neural networks. It was meant to be a paper about how innovation in general happens. But it nicely illustrates how the dynamics of primarily interpolative innovation might play out in the long run, and how interpolative innovation can come to look like it evades Marcus’ critique: it (seemingly) breaks free of its training set.

The Technology Space

Olsson asks us to imagine a high-dimensional space. Each of the dimensions in this space corresponds to some kind of attribute that an idea might have, and which might be measured by a number. Olsson suggests dimensions could correspond to things like “complexity”, “abstractness”, “mathematics”, “utility”, and so on. Scattered throughout this space are specific technologies, represented as points. Essentially, he asks us to imagine the human technological system as a cloud of points, where each point corresponds to a technology and a point’s position tells us about its features. Technologies with a lot of similar features are closely bunched together, and technologies with very different features will be distant.

Figure 1. Technology as a set of points… except with a lot more than two dimensions

This can be mapped into a neural network setting pretty easily. For a neural network to work with data, the features of the data need to be fed to specific neurons as numbers. We can imagine those numbers correspond to positions along axes in Olsson’s technology space. Just as the set of technologies floats out there as a cloud of points in high-dimensional space, so too does the set of training examples. Examples that have very similar features are close together; examples with very different features are far apart.

Innovation in the technological space

In Olsson’s model, all innovation is an interpolation between existing technologies. To begin, he defines incremental innovation as the discovery of a new technological point lying on a line connecting two already-known technologies that are “close” together. Essentially, we add new points to our cloud, but we can only add points in the spaces between existing technologies. However, as we innovate, we add new technologies, and these give us new possibilities for combination. If incremental innovation were all there was, then in the long run we would eventually fill up all the gaps between technologies. In technical parlance, we would be left with the convex hull of the technologies close enough to eventually be fully connected. The convex hull is the region such that no line drawn between points in it falls outside the region.
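
Here is a stripped-down sketch of incremental innovation in this spirit. The number of dimensions, the initial cloud, and the definition of “close” (nearest neighbor) are arbitrary choices of mine, not part of Olsson’s formal model.

```python
# Olsson-style incremental innovation: each new "technology" is a convex
# combination of two close existing ones, so the cloud can only ever fill
# in its own convex hull. All numbers here are arbitrary.
import random

DIM = 5  # stand-ins for axes like "complexity" or "abstractness"
technologies = [[random.uniform(0, 1) for _ in range(DIM)] for _ in range(20)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def incremental_innovation(techs):
    """A new point on the line between a technology and its nearest
    ('close') neighbor -- it can never leave the convex hull."""
    a = random.choice(techs)
    b = min((t for t in techs if t is not a), key=lambda t: distance(a, t))
    w = random.random()
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

for _ in range(100):
    technologies.append(incremental_innovation(technologies))
# No coordinate of any new point exceeds the extremes of the original cloud.
```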

Figure 2. The convex hull of Figure 1. Again, pretend there are a lot more than 2 dimensions.

This is precisely Marcus’ critique of neural networks. They cannot extrapolate, and they cannot go beyond their training data. At best, they can recover the convex hull of their training set. Additionally, note that Olsson assumes it’s only possible to interpolate in between technologies that are already “close” in technological space. This is reminiscent of the way that neural networks need to be tuned to the kind of data they receive. For instance, the inputs to image classification neural nets differ dramatically from AlphaGo’s inputs, and the networks cannot transfer what they’ve learned in one domain to another (indeed, this is another of Marcus’ critiques). So we might imagine, in Olsson’s framework, that neural networks are only capable of interpolating between very similar (i.e., “close”) sets of technologies.

Olsson adds to his model the assumption that every once in a while, purely by random chance, serendipitous discoveries are made. These are points in the technological space that simply appear on the scene. By chance, some of them will lie outside the convex hull described above. We can imagine these correspond to incidents like Fleming’s lucky observation that a stray bit of penicillin mold had retarded the growth of bacteria in a petri dish. Or maybe they correspond to scientific anomalies in Thomas Kuhn’s sense.

So long as incremental innovation is feasible, researchers ignore these discoveries. However, at some point, incremental innovation begins to exhaust its possibilities. All possible combinations have been discovered. When this is the case, researchers turn their attention to these discoveries and engage in what Olsson calls radical innovation. He assumes this is riskier or more costly, and therefore the less favored choice of researchers. However, when incremental innovation is not possible, they have no choice but to turn to radical innovation.

Radical innovation is the creation of a new technology lying on a line between an existing technology and one of the serendipitous discoveries lying outside the convex hull. After much radical innovation, there are enough new technologies close enough to existing ones for incremental innovation to begin again. This time, the incremental innovation makes use of the technologies discovered by radical innovation. In this way, innovation proceeds, breaking free of its old paradigm by exploiting and developing serendipitous discoveries.
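
Continuing the sketch from above, serendipity and radical innovation might look something like this:

```python
# Extends the incremental-innovation sketch above (same DIM, random, and
# technologies). As before, every detail is an illustrative assumption.

def serendipitous_discovery():
    """A point that simply appears anywhere in the space, by pure luck."""
    return [random.uniform(0, 1) for _ in range(DIM)]

def radical_innovation(techs, discovery):
    """A new point on the line between a known technology and an outside
    discovery -- the only way the cloud escapes its old convex hull."""
    a = random.choice(techs)
    w = random.random()
    return [w * x + (1 - w) * y for x, y in zip(a, discovery)]

# Once a lucky find lands outside the hull, radical innovations seed points
# near it, and incremental innovation can resume over the enlarged hull.
lucky = serendipitous_discovery()
technologies.append(radical_innovation(technologies, lucky))
```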

Again, this framework seems a good fit for neural networks. We can imagine radical innovation corresponds to retraining a neural network with additional examples that lie outside its previous training set. Just as Olsson assumes radical innovation to be in some sense harder than incremental innovation, retraining neural networks in this way seems harder than business as usual. In our last post, for example, it took several years before computer scientists figured out how to represent the styles of multiple painters in the same neural network. If we want to add new and distinctive examples to our training data, we may need to modify the network so that it can handle both the existing data and the new one. This kind of “radical” training set expansion is painful and time-consuming, but eventually enables neural networks to go to work interpolating new technologies between existing data and the new discoveries.

An Autopilot for Technological Progress

In Olsson’s model, incremental innovation proceeds as long as the paradigm remains productive (meaning not all ideas in the convex hull have been found). At the start of a paradigm, there are abundant opportunities for combining ideas. The returns to R&D are high. Over time, these ideas get fished out and the return on R&D falls. Throughout this period, random and serendipitous discoveries or anomalies occasionally turn up, but they are left unexploited. As time goes on, though, incremental innovation runs out (and this happens more quickly if there are more researchers). At that point, a period of difficult and groping R&D happens as firms engage in radical R&D. This requires interpolating between the known paradigm and the serendipitous discoveries. After some successes, though, the convex hull is expanded and a period of incremental R&D starts anew.

Olsson meant to describe the cyclical nature of innovation by human civilization. But his model also provides an (admittedly speculative) blueprint for open-ended automated innovation by next generation neural networks. For as long as it’s valuable, the neural networks generate new discoveries by “filling in the gaps” of their training data with interpolations. Think of AlphaGo discovering better moves in Go, and style-transfer networks discovering new styles of painting, but also next generation neural networks discovering new molecules for drugs or material science, and new three-dimensional structures for buildings and vehicles (no links because I made those up).

And we could also automate non-incremental innovation. We could begin by programming the neural network to look for its own serendipitous discoveries. In Olsson’s model, these come about by random luck. But in a neural network, we could program the system to occasionally try something completely random (and outside the boundaries of its training set). This will almost never work. But on the rare occasions when the neural net tries something outside its training set that actually works, it can incorporate this information into its next training set. It can interpolate between this discovery and its existing ideas, and in this way it can escape the cage of its training set.

The barriers

For this kind of autonomous technological progress to work, two problems (at least!) would need to be solved. The first we have already alluded to. Neural networks are quite domain specific. There is no guarantee they can even “see” examples that lie outside their training data, especially if different features of that data are what’s relevant. Maybe we could build neural networks that are trained on the specific task of putting new data into a form usable by old neural networks… but we are well outside my area of expertise here. I have no idea. In any event, maybe humans can do that task (not that it would be easy!).

The other barrier is the nature of the feedback a neural network would receive. Neural networks tune their internal structure according to well-defined goals, whether that goal is winning at Go or matching the style of a painter while preserving the content of an image. A neural network tasked with delivering useful technologies would need feedback on how valuable its discoveries are. How would that be determined? The answer is not so clear. In some cases, it’s relatively easy. If the neural network is generating new drugs, we can run clinical trials and see how they fare. But what if we’re developing polymers for material science, or three-dimensional structures? We can rate these discoveries on various criteria, but they may have unexpected and unanticipated uses. An alternative would be to let the market decide: after all, technologies that are profitable are ones that consumers will pay a lot for, relative to production costs, and this seems to be closely related to the value of an idea. But this solution is not without its own problems. For example, it might lead the neural networks to develop baubles for the super-rich.

I don’t intend to resolve this issue here. Indeed, how best to incentivize human innovators to focus their efforts where it is most valuable is an open question (and one I’ll explore in later posts)! But this need not distract us too much; my main point is to illustrate that it is at least possible for innovation to go a very long way, even if it’s primarily interpolative.

However, just because something is possible, doesn’t mean it’s a good idea. Might there be better ways to innovate? At the end of the day, neural networks are only one way to represent regularities in nature. In upcoming posts, we’ll discuss some of the others.

 

Categories
Uncategorized

Neural Networks and Old Regularities

In my last post, I wrote about the possibility that neural networks can represent new regularities in nature. These regularities are impossible to concisely represent with the kinds of representations humans are comfortable with: chiefly rules, probabilistic statements, and metaphor. This can make their pronouncements seem eerie and magical. But neural networks (hereafter NNs) are nothing if not flexible, and can also represent old and familiar regularities. These are the type we can translate into easier-to-digest formats. And this is another avenue in which they can innovate.

To explore this, let’s talk about NNs copying famous painters.

For a variety of reasons, a lot of computer scientists are interested in teaching NNs to paint. Well, more precisely, in teaching them to generate images that apply a given artistic style to any photo. One of the early papers in this area is Gatys, Ecker, and Bethge (2015) (real “early,” right?). In figure 1, they apply the style of Van Gogh’s Starry Night to a photograph of canal houses.

Figure 1. Applying Van Gogh to an image via neural networks. Source: Gatys, Ecker, and Bethge (2015)

In contrast to the baffling genius of AlphaGo, this is a case where we can understand what the NN is doing. Copying the style of Van Gogh is not that mysterious. People do it all the time. Here’s a lovely painting of trees by Sylvie Burr.

Figure 2. Trees in the style of Van Gogh (creative commons, by Sylvie Burr)

This is not a case where regularities are mysterious and defy explanation. The regularities that characterize Van Gogh’s style include a texturing of thick swirling lines and a moody (rather than realistic) color palette. Show us examples of his style and even someone who has never seen his work will begin to pick out commonalities.

Neural Network Representations in images

NNs are also capable of representing those regularities. But, in an emerging theme, the way they represent those regularities is opaque. We can’t “tell” a NN what regularities characterize Van Gogh’s style. Instead, we give it lots of examples and let it rediscover those regularities on its own.

However, before we go on, we have to talk about a second set of regularities that a NN has to represent to transfer style: regularities in the content and perspective of images done in different styles. In Figure 1, we can tell that both images correspond to the same subject matter and view; they differ only in style. In contrast, Figure 2 and the right-hand side of Figure 1 have (sort of) the same style, but clearly depict different subject matter. The NN has to represent regularities in both style and content.

It does this in different ways (I am drawing on this and this for this section). For computer scientists, “style” is understood as a form of diffuse texture: the curving lines and color palette of Van Gogh, not his composition and choice of subject (in this respect, they miss a lot). When they train a NN to match the style of a painter, the “style” of the image is converted into numbers corresponding to non-localized regularities over the whole surface of the image. For example, in Figure 1, the NN doesn’t care much about making the left-hand side of the image dark (to match the dark spire of Starry Night). Instead, it cares about matching the thick wavy lines and color palette of the entire image. The output can then be evaluated by how much these style numbers differ from those of the example, and the NN’s weights, links, and thresholds (see my first post on NNs) are tweaked in pursuit of this style target.
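To make this a bit more concrete: in Gatys et al.’s approach, the “numbers” describing style are correlations between the feature channels of a NN layer, collected into what’s called a Gram matrix. Because those correlations are averaged over every position in the image, they capture texture while throwing away location. Here is a minimal sketch in Python (using PyTorch; the function names and normalization are mine, illustrative rather than the paper’s exact formulation):

```python
import torch

def gram_matrix(features):
    # features: one layer's activations, shape (channels, height, width)
    c, h, w = features.shape
    f = features.view(c, h * w)
    # Channel-by-channel correlations, summed over every spatial position.
    # Location information is discarded; only diffuse texture survives.
    return (f @ f.t()) / (c * h * w)

def style_score(generated_feats, style_feats):
    # How far is the generated image's texture from the style example's,
    # compared layer by layer?
    return sum(((gram_matrix(g) - gram_matrix(s)) ** 2).sum()
               for g, s in zip(generated_feats, style_feats))
```

Gradient descent then drives this score down, pushing the generated image’s texture toward Van Gogh’s.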

So much for style. To match the subject matter of the original image, the evaluation is done with respect to large-scale, localized regularities. In Figure 1, the NN wants to see a recognizable sky on top, row houses in the middle, and water on the bottom.
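The content side can be sketched the same way, continuing the code above. Instead of comparing position-free correlations, we compare a deep layer’s activations location by location (again, an illustrative sketch with names of my own choosing):

```python
def content_score(generated_feats, content_feats):
    # Per-location comparison of one deep layer's activations: the sky
    # should stay on top, the houses in the middle, the water below.
    return ((generated_feats - content_feats) ** 2).sum()
```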

Recognizing that two images share the same subject and perspective, even if all their pixels differ, is closely related to the problem of image classification. Image classification problems include facial recognition (realizing two different images correspond to the same face) and labeling image content (as corresponding to, say, dogs, or cats, or “inappropriate content”). In all cases, we want to match regularities in the relative positions of large chunks of the image. For example, if we’re identifying faces, we might be comparing the relative size of the “nose” to the “mouth” and “eyes” (although in truth we don’t know exactly what the NNs are doing).

Identifying content regularities across images with different styles is closely related to the image classification problem. So computer scientists simply borrow the NN representations built for image classification! In a literal sense, they start with the hidden layers of NNs trained on image recognition (including the nodes, links, weights, and thresholds) and copy them over to the style-transfer NN. And recall, we don’t really “understand” what a NN is doing when it classifies images; that is, we struggle to translate what it’s doing into metaphors and rules. But we don’t need to. The representation encoded by the NN still does what we want a representation to do: it conveys information about a regularity in nature. Not “understanding” that internal representation doesn’t stop us from redeploying it in a new context. Does it reliably identify image content? Great, that’s all we care about!
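In code, this borrowing is almost embarrassingly literal. A sketch using PyTorch and torchvision (the specific model and layer indices are my illustrative choices, not those of any particular paper, though Gatys et al. did use a VGG network pretrained on image recognition):

```python
import torch
from torchvision import models

# Copy the hidden layers of a NN pretrained on image recognition.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)  # the borrowed representation stays frozen

def extract_features(image, layer_ids=(1, 6, 11, 20, 29)):
    # Pass the image through the copied layers, keeping activations at a
    # few depths. We never need to "understand" what each layer detects;
    # we only need it to reliably encode content and texture.
    feats, x = [], image
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layer_ids:
            feats.append(x)
    return feats
```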

Is this really innovation?

So, NNs are capable of representing regularities in image style and content in such a way that styles can be swapped and content retained. By my definition, this is an innovation: the NN has stepped into the unknown and exploited regularities to generate something far more “interesting” than a random collection of pixels. But it’s fair to say these innovations are not world-changing. Indeed, they can fairly be described as derivative. An artist who only copied other artists’ styles wouldn’t be described as innovative, even if he applied those styles to new contexts.

This is related to a critique of NNs by NYU psychologist Gary Marcus:

In general, the neural nets I tested could learn their training examples, and interpolate to a set of test examples that were in a cloud of points around those examples in n-dimensional space (which I dubbed the training space), but they could not extrapolate beyond that training space. (p. 16)

Put another way, NNs are good at combining aspects of what they are trained on (the content of pictures, the style of painters), but they are always trapped in the cage of these examples. A NN trained in the above manner won’t ever take us beyond Van Gogh.

But this is not as much of a shortcoming as it seems. The ability to usefully combine aspects of disconnected things is, in fact, one of the fundamental creative acts. Indeed, Keith Sawyer (who wrote the book on creativity) defines innovation as “a new mental combination that is expressed in the world.” (pg. 7) I’ll briefly give examples in three different domains.

  • In art, borrowing and recombining ideas can be seen in super-literal forms: think of Pride and Prejudice and Zombies, or of mashup artists like GirlTalk. But it’s also there, just below the surface, in things like Star Wars.
  • Earlier, I asserted all technologies are combinations of pre-existing components. The internal combustion engine is one clear example: it is built from a dizzying set of components that were often pioneered elsewhere. To take two, the crankshaft and flywheel together convert the uneven back-and-forth motion of a piston into smooth rotary motion. Crankshafts had previously been employed to transform the rotational motion of waterwheels and windmills into back-and-forth motion, and flywheels had long been used to give potter’s wheels smooth and continuous motion. (Dartnell, pg. 201-207)
  • Lastly, the product of sexual reproduction is of course a new organism that draws on a mix of genes from each of its parents. Over time, this mixing, matching, and selection generates entirely new species.

The difference between the above and what NNs are doing is a difference of degree, rather than a difference of kind. Most of the innovation done by humans and nature is also bound by the “training space” of available examples.

Expanding the Training Set

The difference is that humans and nature have a vastly, vastly more diverse storehouse of training examples than NNs. It’s not possible for a NN trained to reproduce the style of Van Gogh to go beyond him, because the only examples of painting it has are Van Gogh’s. To develop a new style, it would need, at a minimum, examples drawn from other styles of painting. More importantly, to really generate something we’ve never seen before, the NN would need the capacity to interpolate between different styles. Is this possible?

Yes, and it’s been done. Dumoulin, Shlens, and Kudlur (2017) from Google Brain trained a single NN to transfer the styles of 32 different artists to new images. Because one NN represents all these different styles, it can also apply interpolations between those styles to images. Figure 3 is an example from their paper:

Figure 3. Combining the styles of different painters (figure 7 from Dumoulin, Shlens, and Kudlur 2017)

In this figure, the style of Starry Night has been applied to a picture of Brad Pitt’s face in the upper left corner. Head of a Clown by Georges Rouault is the upper right style, The Scream by Edvard Munch is the lower left style, and Bicentennial Print by Roy Lichtenstein is the lower right style. In between are interpolations between the different styles. Subsequent work by Ghiasi et al. (2017) (a group that includes the same team as above) generalized these techniques to a much wider set of painting styles.
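How does one NN hold 32 styles at once? The trick in Dumoulin, Shlens, and Kudlur’s paper is “conditional instance normalization”: all styles share the same network, but each style gets its own small set of scaling and shifting parameters, and interpolating between styles is just a weighted average of those parameters. A minimal sketch, simplified from the paper (the module below is my own illustrative reconstruction, not their released code):

```python
import torch
import torch.nn as nn

class ConditionalInstanceNorm(nn.Module):
    # All styles share the convolutional layers; only these per-style
    # scale (gamma) and shift (beta) parameters differ between styles.
    def __init__(self, channels, num_styles):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.gamma = nn.Parameter(torch.ones(num_styles, channels))
        self.beta = nn.Parameter(torch.zeros(num_styles, channels))

    def forward(self, x, style_weights):
        # style_weights is a vector summing to 1. A one-hot vector selects
        # a single painter; blending, say, 0.5 Rouault with 0.5 Lichtenstein
        # produces a style no painter ever used.
        g = (style_weights @ self.gamma).view(1, -1, 1, 1)
        b = (style_weights @ self.beta).view(1, -1, 1, 1)
        return self.norm(x) * g + b
```

The interpolations in Figure 3 correspond to sliding those blend weights between the four corners.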

This work shows it’s possible for NNs to develop styles that did not previously exist. Are they any good? In the small set of examples given in Figure 3, I tend to like the interpolations between Rouault and Lichtenstein more than pure Rouault Pitt or pure Lichtenstein Pitt. But the main point of this post is simply to show that NNs can innovate, even when they are using the kinds of regularities humans are able to understand.

Now, what I am not claiming is that NNs can match humans in our ability to combine and interpolate between different ideas. Reading these papers, it’s clear that representing the different styles of painting in one NN was a major technical challenge that took considerable work to implement. Worse, their solution cannot be applied to problems and data different from the painting-style problem, at least not without considerable modification (and maybe not even then). It is going to be a long time before a single NN can combine ideas and concepts from vastly different domains the way we do. But on the other hand, a lot of progress has been made in just three years.