I’ve been trying to understand what motivates me. Sometimes I literally fall in love with some area of learning. I think about little else for months or a year and try to spend every waking minute practicing or learning the thing. In particular, I try to eke out even just minutes at a time wherever possible, and I don’t notice feeling tired (even when objectively my performance declines from the lack of sleep). At other times I at least have no trouble concentrating on a task for ten hours almost nonstop even if it doesn’t feel particularly thrilling. Finally, there are many tasks that I have trouble focusing on for even an hour.
These states come over me involuntarily and are imperfectly aligned with what I consider most important – I might call the involuntary pull and the deliberate prioritization System 1 and System 2 motivation, respectively. So I want to understand better how to create the sorts of conditions that allow me to be more System 1–motivated.
One factor that I think I’ve identified and that seems somewhat universal (so is not only relevant for me) is cognitive dissonance. When I feel strong cognitive dissonance, it motivates me to investigate the topic. This takes different forms:
- Moral dilemmas. These are morally controversial topics. Sometimes the moral dilemmas turn out not to be real dilemmas, but so long as that hasn’t been established, I’d like to group such pseudo-dilemmas under this rubric too. The frequent use of moral dilemmas in fiction and the popularity of trolley problem memes indicate that I’m not alone in this. And it makes societal sense to work hard to resolve these things.
- Taboos. These are “socially controversial” topics – say, for reasons of signaling group membership or for fear of overstepping one’s social status. But it’s hard to find anything morally wrong with engaging with them, or whatever morally bad things they tend to come with are avoidable (other than opportunity costs).
- Empirically controversial topics. These are topics that people have widely different empirical beliefs about. (But only to a degree where they – or I at least – can’t outright dismiss the other view.)
- Imperfectly understood topics. These are topics where the experimental data contradicts the predictions of all known theories.
- Unaesthetic resolutions. These may be problems whose solutions strike me as inelegant, partial, ad-hoc, or disproportionate.
In addition to the cognitive dissonance, flow is important too. I think flow becomes possible when the cognitive dissonance is only on one (or very few) abstraction layers at a time.
It can be flowy to learn a new programming language with paradigms and conventions different from the ones I know while writing simple examples in it. But learning such a very different language while working on a complex piece of software written in it, whose workings I don’t understand either, may already feel less flowy. It can get downright frustrating when the complex software was written by many people with very different levels of seniority, so that I constantly need to evaluate what might make sense within the paradigm I don’t yet understand and what I should refactor thoroughly.
The second one may be a quirk of mine. Some smart friends of mine seem to have learned a lot from unreliable teachers without adopting the teachers’ mistakes. I also have a quirk where I enjoy things more the more they have the shape where you can immediately forget all the examples because you can rederive them at any time. These may or may not apply to you.
I haven’t tested the following ideas yet. They are mostly habits or tricks that I’ve observed in others but never tried because I didn’t understand them. Now I think I understand them better, and I’ll update this list once I have first-hand experience with them.
- Thinking first, reading second. Christian Tarsney told me once at a conference that Hilary Greaves has this tip for researchers: they shouldn’t start learning about a topic by reading all the literature on it but should start by trying to figure it all out by themselves. (This is a second-level telephone game, so it’s entirely possible she said something else. Sorry if that’s the case!) That seemed interestingly counterintuitive to me. One benefit that came to mind is that maybe all established researchers are caught in the local optimum of a theory that they’ve fleshed out in such detail that it now works better for them than anything else. A different theory, fleshed out to even a tenth of that level of detail, might perform better – but from the vantage point of the local optimum it looks as inferior as all the actually inferior theories. But there’s a second benefit: if you form an arbitrary detailed mental model from the start and only then let it clash with reality, the cognitive dissonance may be motivating.
- Tests. There is no way to generate cognitive dissonance if you can’t test your theory against reality. Extroverts will enjoy conversation as a universal test framework. Introverts can converse with friends, mentally simulate dialogs, monologue toward real or imagined readers, learn formalisms that expose internal inconsistencies, analyze freely available data, or write software (if it doesn’t work, you did it wrong). Please comment if you know more testing methods.
- Opinionatedness. I’ve known a few smart people who are very opinionated – not in the sense that they fail to update away from wrong opinions because their priors are too strong, but in the sense that whatever new thing you throw at them, it takes them split seconds to form a clear opinion on it (which they are happy to discard a minute later). My experience, on the other hand, is usually that tons of arguments in all directions come to mind, and I need to spend a lot of time assigning the right weights to them and grappling with interdependencies before any fledgling feeling of an opinion emerges. Having a random opinion from the start may generate strong cognitive dissonance (because it’s probably not fully correct), and that may keep you motivated to hone or revise your opinion, while I may lose interest halfway through assigning all those weights. For bonus points, you can make your random opinion as empirically controversial as possible. (Morally or socially controversial random theories probably come with externalities too costly to be worth it.)
- Practice. Practice can automate a lot of low-level motions so that one’s focus can rest on just that one abstraction layer where the cognitive dissonance is strongest. That enables flow.
- Experience. Experience leads to streams of ideas rather than trickles. If I have to wait days or weeks for new ideas, there’s no flow; but if I have a fresh idea every thirty seconds, before my latest one has even failed, or I have a whole backlog of ideas to try, there’s a lot more flow.
- Confirmation bias. Maybe confirmation bias is motivating at first because it quickly leads to a clear (if random) model. Then, of course, the confirmation bias needs to be turned off again. I’m not sure who can pull that off.