You can find the whole question thread on the EA Forum. This is a lightly edited transposition of the thread; that is, this article puts each question first and then lists all the answers. This way, the table of contents allows you to jump to whichever question interests you most.
- Thinking vs. reading
- Is there something interesting here?
- Survival vs. exploratory mindset
- Optimal hours of work per day
- Learning a new field
- Hard problems
- Emotional motivators
- Typing speed
- Obvious questions
- Tiredness, focus, etc.
If you want to research a particular topic, how do you balance reading the relevant literature against thinking yourself and recording your thoughts? I’ve heard second-hand that Hilary Greaves recommends thinking first so as to be unanchored by the existing literature and the existing approaches to the problem. Another benefit may be that you start out reading the literature with a clearer mental model of the problem, which might make it easier to stay motivated and to remain critical/vigilant while reading. (See this theory of mine.) Would you agree, or do you have a different approach?
I think it depends on the context. Sometimes it makes sense to lean toward thinking more and sometimes it makes sense to lean toward reading more. (I wouldn’t advise focusing exclusively on one or the other.) Unjustified anchoring is certainly a worry, but I think reinventing the wheel is also a worry. One could waste two weeks groping toward a solution to a problem that could have been solved in an afternoon just by reading the right review article.
Another benefit of thinking before reading is that it can help you develop your research skills. Noticing some phenomenon and then developing a model to explain it is a super valuable exercise. If it turns out you reproduce something that someone else has already done and published, then great, you’ve gotten experience solving some problem and you’ve shown that you can think through it at least as well as some expert in the field. If it turns out that you have produced something novel, then it’s time to see how it compares to existing results in the literature and get feedback on how useful it is.
This said, I think this is more true for theoretical work than applied work, e.g. the value of doing this in philosophy > in theoretical economics > in applied economics. A fair amount of EA-relevant research is summarising and synthesising what the academic literature on some topic finds and it seems pretty difficult to do that by just thinking to yourself!
I don’t think I really have explicit policies regarding balancing reading against thinking myself and recording my thoughts. Maybe I should.
I’m somewhat inclined to think that, on the margin and on average (so not in every case), EA would benefit from a bit more reading of relevant literatures (or talking to more experienced people in an area, watching of relevant lectures, etc.), even at the expense of having a bit less time for coming up with novel ideas.
I feel like EA might have a bit too much of a tendency towards “think really hard by oneself for a while, then kind-of reinvent the wheel but using new terms for it.” It might be that, often, people could get to similar ideas faster and in a way that connects better to existing work (making it easier for others to find, build on, etc.) by doing some extra reading first.
Note that this is not me suggesting EAs should increase how much they defer to experts/others/existing work. Instead, I’m tentatively suggesting spending more time learning what experts/others/existing work has to say, which could be followed by agreeing, disagreeing, critiquing, building on, proposing alternatives, striking out in a totally different direction, etc.
(On this general topic, I liked the post The Neglected Virtue of Scholarship.)
Less important personal ramble:
I often feel like I might be spending more time reading up-front than is worthwhile, as a way of procrastinating, or maybe out of a sort-of perfectionism (the more I read, the lower the chance that, once I start writing, what I write is mistaken or redundant). And I sort-of scold myself for that.
But then I’ve repeatedly heard people remark that I have an unusually large amount of output. (I sort-of felt like the opposite was true, until people told me this, which is weird since it’s such an easily checkable thing!) And I’ve also got some feedback that suggested I should move more in the direction of depth and expertise, even at the cost of breadth and quantity of output.
So maybe that feeling that I’m spending too much time reading up-front is just mistaken. And as mentioned, that feeling seems to conflict with what I’d (tentatively) tend to advise others, which should probably make me more suspicious of the feeling. (This reminds me of asking “Is this how I’d treat a friend?” in response to negative self-talk [source with related ideas].)
I’ve been playing around with spending 15–60 min. sketching out a quick model of what I think of something before starting in on the literature (by no means a consistent thing I do though). I find it can be quite nice and help me ask the right questions early on.
I imagine that virtually any research project, successful and unsuccessful, starts with some inchoate thoughts and notes. These will usually seem hopelessly inadequate but they’ll sometimes mature into something amazingly insightful. Have you ever struggled with mental blocks when you felt self-conscious about these beginnings, and have you found ways to (reliably) overcome them?
Yep, I am intimately familiar with hopelessly inchoate thoughts and notes. (I’m not sure I’ve ever completed a project without passing through that stage.) For me at least, the best way to overcome this state is to talk to lots of people. One piece of advice I have for young researchers is to come to terms with sharing your work with people you respect before it’s polished. I’m very grateful to have a large network of collaborators willing to listen to and read my confused ramblings. Feedback at an early stage of a project is often much more valuable than feedback at a later stage.
Personally, I’m very self-conscious about my work and tend to wait too long to share it. But the culture of RP seems to fight that tendency – which I think is very productive!
Idk if this fits exactly but when I started my research position I tried to have the mindset of, “I’ll be pretty bad at this for quite a while.” Then when I made mistakes I could just think, “right, as expected. Now let’s figure out how to not do that again.” Not sure how sustainable this is but it felt good to start! In general it seems good to have a mindset of research being nearly impossibly hard. Humans are just barely able to do this thing in a useful way and even at the highest levels academics still make mistakes (most papers have at least some flaws).
This is his answer to the questions about self-consciousness and “Is there something interesting here?”
These questions definitely resonate with me, and I imagine they’d resonate with most/all researchers.
I have a tendency to continually wonder if what I’m doing is what I should be doing, or if I should change my priorities. I think this is good in some ways. But sometimes I’d make better decisions faster if I just actually pursued an idea more “confidently” for a bit, to get more info on whether it’s worth pursuing, rather than just “wondering” about it repeatedly and going back and forth without much new info to work with. Basically, I might do too much self-doubt-style armchair reasoning, with too little actual empirical info.
Also, pursuing an idea more “confidently” for a bit will not only inform me about whether to continue pursuing it further, but also might result in outputs that are useful for others. So I try to sometimes switch into “just commit and focus mode” for a given time period, or until I hit a given milestone, and mostly minimise reflection on what I should prioritise during that time. But so far this has been like a grab bag of heuristics and habits I use, rather than a more precise guideline for myself.
See also When to focus and when to re-evaluate.
Things that help me with this, and/or some scattered related thoughts, include:
- Talking to others and getting feedback, including on early-stage ideas
- I liked David and Jason’s remarks on this in their comments
- A sort-of minimum viable product and quick feedback loop approach has often seemed useful for me – something like:
- First getting verbal feedback from a couple people on a messy, verbal description of an idea
- Then writing up a rough draft about the idea and circulating it to a couple more people for a bit more feedback
- Then polishing and fleshing out that draft and circulating it to a few more people for more feedback
- Then posting publicly
- (But only proceeding to the next step if evidence from the prior one – plus one’s own intuitions – suggested this would be worthwhile)
- Feedback has often helped me determine whether an idea is worth pursuing further, feel more comfortable/motivated with pursuing an idea further (rather than being mired in unproductive self-doubt), develop the idea, work out which angles of it are most worth pursuing, and work out how to express it more clearly
- Reminding myself that I haven’t really gathered any new info since the last time I thought “Should this really be what I spend my time on?,” so thinking about that again is unlikely to reveal new insights, and is probably just a stupid part of my psychology rather than something I’d endorse.
- I might think to myself something like “If a friend was doing this, you’d think it’s irrational, and gently advise them to just actually commit for a bit and get new info, right? So shouldn’t you do the same yourself?”
- Remembering that Algorithms to Live By draws an analogy to a failure mode in which computers continually reprioritise tasks, and the reprioritisation takes up just enough processing power that no actual progress on any of the tasks occurs – a cycle that can continue forever. The way out is to at some point just do tasks, even without having confidence that these “should” be top priority.
- This is just my half-remembered version of that part of the book, and might be wrong somehow.
- Remembering that I’d be deeply uncertain about the “actual” value of any project I could pursue, because the world is very complicated and my ambitions (contribute to improving the long-term future) are pretty lofty. The best I can do is something that seems good in expected value but with large error bars. So the fact I feel some uncertainty and doubt provides basically no evidence that this project isn’t worth pursuing. (Though feeling an unusually large amount of uncertainty and doubt might.)
- Remembering that, if the idea ends up seeming to have not been important but there was a reasonable ex ante case that it might’ve been important, there’s a decent chance someone else would end up pursuing it if I don’t. So if I pursue it, then find out it seems to not be important, then write about what I found, that might still have the effect of causing an important project to get done, because it might cause someone else to do that important project rather than doing something similar to what I did.
Examples to somewhat illustrate the last two points:
This year, in some so-far-unpublished work, I wrote about some ideas that:
- I initially wasn’t confident about the importance of
- Seemed like they should’ve been obvious to relevant groups, but seemed not to have been discussed by them. And that generally seems like (at least) weak evidence that an idea either (a) actually isn’t important or (b) has been in essence discussed in some other form or place that I just am not familiar with.
So when I had the initial forms of these ideas and wasn’t sure how much time (if any) to spend on them, I took roughly the following approach:
I developed some thoughts on some of the ideas. Then I shared those thoughts verbally or as very rough drafts with a small set of people who seemed like they’d have decent intuitions on whether the ideas were important vs unimportant, somewhat novel vs already covered, etc.
In most cases, this early feedback indicated that it was at least plausible that the ideas were somewhat important and somewhat novel. This – combined with my independent impression that these ideas might be somewhat important and novel – seemed to provide sufficient reason to flesh those ideas out further, as well as to flesh out related ideas (which seemed like they’d probably also be important and novel if the other ideas were, and vice versa).
So I did so, then shared that slightly more widely. Then I got more positive feedback, so I bothered to invest the time to polish the writings up a bit more.
Meanwhile, when I fleshed one of the ideas out a little, it seemed like that one turned out to probably not be very important at all. So with that one, I just made sure that my write-up made it clear early on that my current view was that this idea probably didn’t matter, and I neatened up the write-up just a bit, because I still thought the write-up might be a bit useful either to:
- Explain to others why they shouldn’t bother exploring the same thing
- Make it easy for others to see if they disagreed with my reasoning for why this probably didn’t matter, because I might be wrong about that, and it might be good for others to quickly check that reasoning
Having spent time on that idea sort-of felt in hindsight silly or like a mistake. But I think I probably shouldn’t see that as having been a bad decision ex ante, given that:
- It seems plausible that, if not for my write-up, someone else would’ve eventually “wasted” time on a similar idea
- This was just one out of a set of ideas that I tried to flesh out and write up, many/most of which still (in hindsight) seem like they were worth spending time on
- So maybe it’s very roughly like I gave 60% predictions for each of 10 things, and decided that that’d mean the expected value of betting on those 10 things was good, and then 6 of those things happened, suggesting I was well-calibrated and was right to bet on those things
- (I didn’t actually make quantitative predictions)
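The rough calibration arithmetic in those last bullets can be sketched in a few lines. This is purely an illustrative toy (as noted above, no quantitative predictions were actually made), assuming 10 independent ideas each judged 60% likely to be worth the time:

```python
from math import comb

# Hypothetical calibration sketch: 10 ideas, each forecast as 60%
# likely to be worth pursuing, of which 6 actually panned out.
n, p, hits = 10, 0.6, 6

# Expected number of worthwhile ideas under the forecasts:
expected = n * p  # 6.0 – matches the observed count exactly

# Probability of exactly 6 hits if the 60% forecasts were accurate,
# via the binomial formula C(n, k) * p^k * (1 - p)^(n - k):
prob_exact = comb(n, hits) * p**hits * (1 - p) ** (n - hits)
print(expected, round(prob_exact, 3))  # 6.0 0.251
```

Observing roughly the expected number of hits is consistent with good calibration, which is the (informal) sense in which betting on all 10 ideas could still have been the right call ex ante even though some individual bets failed.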
And some of the other ideas were in between – no strong reason to believe they were important or that they weren’t – so I just fleshed them out a bit and left it there, pending further feedback. (I also had other things to work on.)
In a reply, I referred to this related blog post of mine. Michael replied:
It’s also important to be transparent about one’s rigor and to make the negative results findable for others. The second is obvious. The first is because the dead end may not actually be a dead end but only looked that way given the particular way in which you had resolved the optimal stopping problem of investigating it (even) further.
I agree with these points, and think that they might sometimes be under-appreciated (both in and outside of EA).
To sort-of restate your points:
- I think it’s common for people to not publish explorations that turned out to seem to “not reveal anything important” (except of course that this direction of exploration might be worth skipping).
- Much has been written about this sort of issue, and there can be valid reasons for that behaviour, but sometimes it seems unfortunate.
- I think another failure mode is to provide some sort of public info of your belief that this direction of exploration seems worth skipping, but without sufficient reasoning transparency, which could make people rule this out too much/too early.
- Again, there can be valid reasons for this (if you’re sufficiently confident that it’s worth ruling out this direction and you have sufficiently high-value other things to do, it might not be worth spending time on a write-up with high reasoning transparency), but sometimes it seems unfortunate.
In the same context, I also brought up a bit of CFAR lore:
Part of this reminds me a lot of CFAR’s approach here (I can’t quite tell whether Julia Galef is interviewer, interviewee, or both):
For example, when I’ve decided to take a calculated risk, knowing that I might well fail but that it’s still worth it to try, I often find myself worrying about failure even after having made the decision to try. And I might be tempted to lie to myself and say, “Don’t worry! This is going to work!” so that I can be relaxed and motivated enough to push forward.
But instead, in those situations I like to use a framework CFAR sometimes calls “Worker-me versus CEO-me.” I remind myself that CEO-me has thought carefully about this decision, and for now I’m in worker mode, with the goal of executing CEO-me’s decision. Now is not the time to second-guess the CEO or worry about failure.
Your approach to gathering feedback and iterating on the output – refining it with every iteration while also deciding whether it’s worth another iteration – sounds great!
I think a lot of people aim for such a process, or will want to after reading your comment, but will be held back from showing their first draft to their first round of reviewers. They worry the reviewers will think badly of them for addressing a topic of this particular level of perceived difficulty or relevance (maybe it’s too difficult or too irrelevant in the reviewer’s opinion), or think badly of them for a particular wording, or think badly of them because they think you should’ve anticipated a negative effect of writing about the topic and not done so (e.g., some complex acausal trade or social dynamics thing that didn’t occur to you), or they just generally have diffuse fears holding them back. Such worries are probably disproportionate, but still, overcoming them will probably require particular tricks or training.
I like that “Worker-me versus CEO-me” framing, and hadn’t heard of it or seen that page, so thanks for sharing that. It does seem related to what I said in the parent comment.
I share the view that it’ll be decently common for a range of disproportionate worries to hold people back from striking out into areas that seem good in expected value but very uncertain and with real counterarguments, and from sharing early-stage results from such pursuits. I also think there can be a range of good reasons to hold back from those things, and that it can be hard to tell when the worries are disproportionate!
I imagine it’d be hard (though not impossible) to generate advice on this that’s quite generally useful without being vague/littered with caveats. People will probably have to experiment to some extent, get advice from trusted people on their general approach, and continue reflecting, or something like that.
I often have some (for me) novel ideas, but then it turns out that whether true or false, the idea doesn’t seem to have any important implications. Conversely, I’ve dismissed ideas as unimportant, and years later someone developed them – through a lot of work I didn’t do because I thought it wasn’t important – into something that did connect to important topics in unanticipated ways. Do you have rules of thumb that help you assess early on whether a particular idea is worth pursuing?
Yep, this also happens to me. Unfortunately, I don’t have any particular insight. Oftentimes the only way to know whether an idea is interesting is to put in the hard exploratory work. Of course, one shouldn’t be afraid to abandon an idea if it looks increasingly unpromising.
I mostly try to work out how excited I am by this idea and whether I could see myself still being excited in 6 months, since for me having internal motivation to work on a project is pretty important. I also try to chat about this idea with various other people and see how excited they are by it.
See section Self-Consciousness.
I’ve heard of the distinction between survival mindset and exploratory mindset, which makes intuitive sense to me. (I don’t remember where I learned of these terms, but I tried to clarify how I use them in a comment below.) I imagine that for most novel research, exploratory mindset is the more useful one. (Or would you disagree?) If it doesn’t come naturally to you, how do you cultivate it?
By survival mindset I mean: extreme risk aversion, fear, distrust toward strangers, little collaboration, isolation, guarded interaction with others, hoarding of money and other things, seeking close bonds with family and partners, etc., but I suppose it also comes with modesty and contentment, equanimity in the face of external catastrophes, vigilance, preparedness, etc.
By exploratory mindset I mean: risk neutrality, curiosity, trust toward strangers, collaboration, outgoing social behavior, making oneself vulnerable, trusting partners and family without much need for ritual, quick reinvestment of profits, etc., but I suppose also a bit lower conscientiousness, lacking preparedness for catastrophes, gullibility, overestimating how much others trust you, etc.
Those categories have been very useful for me, but maybe they’re a lot less useful for most other people? You can just ignore that question if the distinction makes no intuitive sense this way or doesn’t quite fit your world models.
Insofar as I understand the terms, an exploratory mindset is an absolute must. Not sure how to cultivate it, though.
I also haven’t heard these terms before, but from your description (which frames a survival mindset pretty negatively), an exploratory mindset comes fairly naturally to me and therefore I haven’t ever actively cultivated it. Lots of research projects fail so extreme risk aversion in particular seems like it would be bad for researchers.
Have you found that a particular number of hours of concentrated work per day works best for you? By this I mean time you spend focused on your research project, excluding time spent answering emails, AMAs, and such. (If hours per day doesn’t seem like an informative unit to you, imagine I asked “hours per week” or whatever seems best to you.)
I tend to work about 4–7 hours per day including meetings and everything. Counting only mentally intensive tasks, I probably get around 4–5 a day. Sometimes I’m able to get more if I fall into a good rhythm with something. Looking around at estimates (RescueTime says the average is just ~3 hours per day of productive work), it seems clear I’m hitting a pretty solid average. I still can’t shake the feeling that everyone else is doing more work. Part of this is because people claim they do much more work. I assume this is mostly exaggeration, though, because hours worked is used as a signal of status and being a hard worker. But still, it’s hard to shake the feeling.
I typically aim for 6–7 hours of deep work a day and a couple of dedicated hours for miscellaneous tasks and meetings. Since starting part-time at RP I’ve been doing 6 days a week (2 RP, 4 PhD), but before that I did 5. I find RP deep work less taxing than PhD work. 6 days a week is at the upper limit of manageable for me at the moment, so I plan to experiment with different schedules in the new year.
I work between 4 and 8 hours a day. I don’t find any difference in my productivity within that range, though I imagine if I pushed myself to work more than 8, I would pretty quickly hit diminishing returns.
I don’t know what I mean by “field,” but probably something smaller than “biology” and bigger than “how to use Pipedrive.” If you need to get up to speed on such a field for research that you’re doing, how do you approach it? Do you read textbooks (if so, linearly or more creatively?) or pay grad students to answer your questions? Does your approach vary depending on whether it’s a subfield of your field of expertise or something completely new?
I can answer [this], as I’ve been doing it for Wild Animal Welfare since I was hired in September. WAW is a new and small field, so it is relatively easy to learn the field, but there’s still so much! I started by going backwards (into the Welfare Biology movement of the 80s and 90s) and forwards (into the WAW EA orgs we know today) from Brian Tomasik, consulting the primary literature over various specific matters of fact. A great thing about WAW being such a young field (and so concentrated in EA) is that I can reach out to basically anyone who’s published on it and have a real conversation. It’s a big shortcut!
I should note that my background is in Evolutionary Biology and Ecology, so someone else might need a lot more background in those basics if they were to learn WAW.
I just do a lot of literature review. I tend to search for the big papers and meta-analyses, skim lots of them, and try to make a map of what the key questions are and what answers different authors propose for each question (noting citations for each answer). This helps to distill the field, I think, and serves as something relatively easy to reference. Generally there’s a lot of restructuring that needs to happen as you learn more about a topic area and see that some questions you used were ill-posed or some papers answer somewhat different questions. In short, this gets messy, but it seems like a good way to start, and sometimes it works quite well for me.
I don’t know if I have a great, well-chosen, or transferable method here, so I think people should pay more attention to my colleagues’ answers than mine. But FWIW, I tend to do a mixture of:
- reading Wikipedia articles
- reading journal article abstracts
- reading a small set of journal articles more thoroughly
- listening to podcasts
- listening to audiobooks
- watching videos (e.g., a Yale lecture series on game theory)
- talking to people who are already at least sort-of in my network (usually more to get a sounding board or “generalist feedback,” rather than to leverage specific expertise of theirs)
I’ve also occasionally used free online courses, e.g. the Udacity Intro to AI course. (See also What are some good online courses relevant to EA?)
Whether I take many notes depends on whether I’m just learning about a field because I think it might be useful in some way in future for me to know about that field, or because I have at least a vague idea of a project I might work on within that field (e.g., “how bad would various possible types of nuclear wars be, from a longtermist perspective?”). In the latter case, I’ll take a lot of notes as I go in Roam, beginning to structure things into relevant sub-questions, things to learn more about, etc.
Since leaving university, I haven’t really made much use of textbooks, flashcards, or reaching out to experts who aren’t already in my network. It’s not really that I actively chose not to make much use of these things (it’s just that I never actively chose to make much use of them), and I think it’s plausible that I should use them more. I’ll very likely talk to a bunch of experts for my current or upcoming research projects.
I’m a big fan of textbooks and schedule time to read a couple of textbook chapters each week. LessWrong’s best textbooks on every subject thread is pretty good for finding them. I usually make Anki flashcards to help me remember the key facts, but I’ve recently started experimenting with Roam Research for note-taking, which I’m also enjoying, so my “learning flow” is in flux at the moment.
I can’t emphasize enough the value of just talking to existing experts. For me at least, it’s by far the most efficient way to get up-to-speed quickly. For that reason, I really value having a large network of diverse people I can contact with questions. I put a fair amount of effort into cultivating such a network.
I imagine that you’ll sometimes have to grapple with problems that are sufficiently hard that it feels like you didn’t make any tangible progress on them (or on how to approach them) for a week or more. How do you stay optimistic and motivated? How and when do you “escalate” in some fashion – say, discuss hiring a freelance expert on some other field?
I’m fortunate that my work is almost always intrinsically interesting. So even if I don’t make progress on a problem, I continue to be motivated to work on it because the work itself is so very pleasant. That said, as I’ve emphasized above, when I’m stuck, I find it most helpful to talk to lots of people about the problem.
I have a maybe-controversial take that research (even in LT space) is motivated largely by signalling and status games. From this view the advice many gave about talking to people about it sounds good. Then you generate some excitement as you’re able to show someone else you’re smart enough to solve it, or they get excited to share what they know, etc. I think if you had a nice working group on any topic, no matter how boring, everyone would get super excited about it. In general, connecting the solution to a hard problem to social reward is probably going to work well as a motivator by this logic.
I’m not actually sure if the precise problem you’re describing resonates with me. I definitely often feel very uncertain about:
- whether the goal I’m striving towards really matters at all
- even if so, whether it’s a goal worth prioritising
- whether I should prioritise it (is it my comparative advantage?)
- whether anything I produce in pursuing this goal will be of any use to anyone
But I’m not sure there have been cases where, for a week or more, I didn’t feel like I was at least progressing towards:
- having the sort of output I had planned or now planned to produce (setting aside the question of whether that output will be useful to anyone), and/or
- deciding (for good reason) to not bother trying to create that sort of output
Note that I’d count as “progress” cases where I explored some solutions/options that I thought might work/be useful for X, and all turned out to be miserable wastes of time, so I can at least rule those out and try something else next week. I’d also count cases where I learned other potentially useful things in the process of pursuing dead ends, and that knowledge seems likely to somehow benefit this or other projects.
It is often the case that my estimate of how many remaining days something will take is longer at the end of the week than it was at the beginning of the week. But this is usually coupled with me thinking that I have made some sort of progress – I just also realised that some parts will be harder than I thought, or that I should do a more thorough job than I’d planned, or something like that.
(But I feel like maybe I’m just interpreting your question differently to what you intended.)
It’s easy to be motivated on a System 2 basis by the importance of the work, but sometimes that fails to carry over to System 1 when dealing with some very removed or specific work – say, understanding some obscure proof that is relevant to AI safety along a long chain of tenuous probabilistic implications. Do you have tricks for how to stay System 1 motivated in such cases – or when do you decide that a lack of motivation may actually mean that something is wrong with the topic and you should question whether it is sufficiently important?
When I reflect on my life as a whole, I’m happy that I’m in a career that aims to improve the world. But in terms of what gets me out of bed in the morning and excited to work, it’s almost never the impact I might have. It’s the intrinsically interesting nature of my work. I almost certainly would not be successful if I did not find my research to be so fascinating.
My main trick for dealing with this is to always plan my day the night before. I let System 2 Dave work out what is important and needs to be done, and put blocks in the calendar for these things. When System 1 Dave is working the next day, his motivation doesn’t end up mattering so much, because he can easily defer to what System 2 Dave said he should do. I don’t read too much into a lack of System 1 motivation; it happens, and I haven’t noticed that it’s particularly correlated with how important the work is. It’s more correlated with things like how scary it is to start some new task, and irrelevant things like how much sunlight I’ve been getting.
I’ve been thinking a lot recently about what I’m calling “incentive landscaping.” The basic idea is that your system 2 has a bunch of things it wants to do (e.g. have impact). You can then shape your incentive landscape such that your system 1 is also motivated to do the highest impact things. Working for someone who shares your values is the easiest way to do this, as then your employer and peers will reward you (either socially or with promotions) for doing things which are impact-oriented. This still won’t be perfectly optimized for impact, but it gets you close. Then you can add in some extra motivators, like a small group you meet with to talk about progress on something that otherwise feels poorly motivated, or asking others to make your reward conditional on you completing something your system 2 thinks is important. It’s still early days for me on this, though, and I think it’s a really hard thing to get right.
(Disclaimer: I’m just reporting on my own experience, and think people will vary a lot in this sort of area, so none of the following is even slightly a recommendation.)
- Personally, I seem to just find it pretty natural to spend a lot of hours per week doing work-ish things
- I tend to be naturally driven to “work hard” (without it necessarily feeling much like working) by intellectual curiosity, by a desire to produce things I’m proud of, and by a desire for positive attention (especially but not only from people whose judgement I particularly respect)
- That third desire in particular can definitely become a problem, but I try to keep a close eye on it and ensure that I’m channeling that desire towards actions I actually endorse on reflection
- I do get run down sometimes, and sometimes this has to do with too many hours per week for too many weeks in a row. But the things that seem more liable to run me down are feeling that I lack sufficient autonomy in what I do, how, and when; or feeling that what I’m doing isn’t valuable; or feeling that I’m not developing skills and knowledge I’ll use in future
- That last point means that one type of case in which I do struggle to be motivated is cases where I know I’m going to switch away from a broad area after finishing some project, and that I’m unlikely to use the skills involved in that project again.
- In these cases, even if I know that finishing that project to a high standard would still be valuable and is worth spending time on, it can be hard for me to be internally motivated to do so, because it no longer feels like doing so would “level me up” in ways I care about.
- I seem to often become intensely focused on a general area in an ongoing way (until something switches my focus to another area), and just continually think about it, in a way that feels positive or natural or flow-like or something
- This happened for stand-up comedy, then for psychology research, then for teaching, then for EA stuff (once I learned about EA)
- (The other points above likewise applied during each of those four “phases” of my adult life)
Luckily, the sort of work I do now:
- is very intellectually stimulating
- involves producing things I’m (at least often!) proud of
- can bring me positive attention
- allows me a sufficient degree of autonomy
- seems to me to be probably the most valuable thing I could realistically be doing at the moment (in expectation, and with vast uncertainty, of course)
- involves developing skills and knowledge I expect I might use in future
That means it’s typically been relatively easy for me to stay motivated. I feel very fortunate both to have the sort of job and “the sort of psychology” I’ve got. I think many people might, through no fault of their own, find it harder to be emotionally motivated to spend lots of hours doing valuable work, even when they know that that work would be valuable and they have the skills to do it. Unfortunately, we can’t entirely choose what drives us, when, and how.
(There’s also a scary possibility that my tendency so far to be easily motivated to work on things I think are valuable is just the product of me being relatively young and relatively new to EA and the areas I’m working in, and that that tendency will fade over time. I’d bet against that, but could be wrong.)
I have this pet theory that a high typing speed is important for some forms of research that involve a lot of verbal thinking (e.g., maybe not maths). The idea is that our memory is limited, so we want to take notes of our thoughts. But handwriting is slow, and typing is only mildly faster, so unless one thinks slowly or types very fast, there is a disconnect that causes continual stalling, impatience, and forgotten ideas, and prevents the process from flowing. Does that make any intuitive sense to you? Do you have any tricks (e.g., dictation software)?
No idea what my typing speed is, but it doesn’t feel particularly fast, and that doesn’t seem to handicap me. I’ve always considered myself a slow thinker, though.
I struggle to imagine typing speed being a binding constraint on research productivity, since I’ve never found typing speed to be a problem for getting into flow – but when I just checked, my wpm was 85, so maybe I’d feel differently if it were slower. When I’m coding, the vast majority of my time is spent thinking about how to solve the problem I’m facing, not typing the code that solves the problem. When I’m writing first drafts, I think typing speed is a bit more helpful for the reasons you mention, but again, more time goes into planning the structure of what I want to say and into polishing than into the first pass at writing, where speed might help.
I think my own belief is that typing speed is probably less important than you appear to believe, but I care enough about it that I logged 53 minutes of typing practice on keybr this year (usually during moments where I’m otherwise not productive and just want to get “in flow” doing something repetitive), and I suspect I still can productively use another 3–5 hours of typing practice next year even if it trades off against deep work time (and presumably many more hours than that if it does not).
At least when I’m doing reflections or broad thinking, I often circumvent this by doing a lot of voice notes with Dragon. That way I can get words down at the speed of thought. It’s never perfect, but ~97% of it is readable, so it’s good enough. Then, if you want to actually have good notes, you go through and summarize your long jumble of semi-coherent thoughts into something decent sounding. This has the side effect of some spaced repetition learning as well!
I’d be surprised if typing speed was a big factor explaining differences in how much different researchers produce, or in their ability to produce certain types of output. (But of course, that claim is pretty vague – how surprised would I be? What do I mean by “big factor”?)
But I just did a typing test, and got 92 WPM (with “medium” words, and 1 typo), which is apparently high. So perhaps I’m just taking that for granted and not recognising how a slower typing speed could’ve limited me. Hard to say.
Nate Soares has an essay on “obvious advice.” Michael Aird mentioned that in many cases he just wanted to follow up on some obvious ideas. They were obvious in hindsight, but evidently they hadn’t been obvious to anyone else for years. Is there a distinct skill of “noticing the obvious ideas” or “noticing the obvious open questions”? And can it be trained or turned into a repeatable process?
Yeah, I think there is a general skill of “noticing the obvious.” I don’t think I’m great at it, but one thing I do pretty often is reflect on the sorts of things that appear obvious now that weren’t obvious to smart people ~200 years ago.
I suspect that while ignoring or failing to notice “obvious questions/advice” is sometimes just a coincidental unforced error, more often than not there is some form of motivated reasoning going on behind the scenes (e.g., because noticing it would invalidate a hypothesis I’m wedded to, because it involves unpleasant tradeoffs, because some beliefs are lower prestige, or because it makes the work I do seem less important). I think carefully training myself to notice these things has been helpful, though I suspect I still miss a lot of obvious stuff.
(Just my personal, current, non-expert thoughts, as always. Also, I’m not sure I’m addressing precisely the question you had in mind.)
A summary of my recommendations in this vicinity:
- If people want to do research and want a menu of ideas/questions to work on, including ideas/questions that seem like they obviously should have a bunch of work on them but don’t yet, they could check out this central directory for open research questions, and/or an overlapping 80,000 Hours post.
- If people want to discover “new” instances of such ideas/questions, one option might be to just try to notice ideas/variables/assumptions that seem important to some people’s beliefs, but that seem debatable and vague, have been contested by others, and/or haven’t been stated explicitly and fleshed out.
- One way to do this might be to have a go at rigorously, precisely writing out the arguments that people seem to be acting as if they believe, in order to spot the assumptions that seem required but that those people haven’t stated/emphasised.
- One could then try to explore those assumptions in detail, either just through more fleshed-out “armchair reasoning,” or through looking at relevant empirical evidence and academic work, or through some mixture of those things.
- I think this is a big part of what I’ve done this year.
- Here’s one example of a piece of my own work which came from roughly that sort of process.
I’ll add more detailed thoughts below.
I interpret this question as being focused on cases in which an idea/open question seems like it should’ve been obvious, or seems obvious in retrospect, yet it has been neglected so far. (Or the many cases we should assume still exist in which the idea/question is still neglected, but would – if and when finally tackled – seem obvious.)
It seems to me that there are two major types of such cases:
- Unnoticed: Cases in which the ideas/open questions haven’t even been noticed by almost anyone
- Or at least, almost anyone in the relevant community/field.
- So I’d still say an idea counts as “unnoticed” for these purposes even if, for example, a very similar idea has been explored thoroughly in sociology, but no one in longtermism has noticed that that idea is relevant to some longtermist issue, nor independently arrived at a similar idea.
- Noticed yet neglected: Cases in which the ideas/open questions have been noticed, but no one has really fleshed them out or tackled them much
- E.g., a fair number of longtermists have noticed the question of how likely various types of recovery are from various types of civilizational collapse. But as far as I’m aware, there was nothing even approaching a thorough analysis of the question until some recent still-in-progress work, and there’s still room for much more work here.
- Another example is questions related to how likely global, stable totalitarianism is; what factors could increase or decrease the odds of that; and what to do about this. Some people have highlighted such questions (including but not only in the context of advanced AI), but I’m not aware of any detailed work on them.
This is really more a continuum than a binary distinction. In almost all cases, there’s probably been someone in a relevant community who’s at least briefly noticed something relevant. But sometimes it’ll just be that something kind-of relevant has been discussed verbally a few times and then forgotten, while other times it’ll be that people have prominently highlighted pretty precisely the relevant open question, yet no one has actually worked on it. (And of course there’ll be many cases in between.)
For “noticed yet neglected” ideas/questions, recommendation 1 from above will be more relevant: people could find many ideas/questions of this type in this central directory for open research questions, and just get cracking on them.
That directory is like a map pointing the way to many trees that might be full of low-hanging fruit that would’ve been plucked by now in a better world. And I really would predict that a lot of EAs could do valuable work by just having a go at those questions. (I’m less confident that this is the most valuable thing lots of EAs could be doing, and each person would have to think that through for themselves, in light of their specific circumstances. See also.)
So we don’t necessarily need all EA-aligned researchers to try to cultivate a skill of “noticing the ideas that should’ve been tackled/fleshed out already” (though I’m sure some should). Some could just focus on actually exploring the ideas that have been noticed but still haven’t been tackled/fleshed out.
For “unnoticed” ideas/questions, recommendation 2 from above will be more relevant.
I think this dovetails somewhat with Ben Garfinkel calling for more people to just try to rigorously write up more detailed versions of arguments about AI risk that often float around in sketchier or briefer form. (Obviously brevity is better than length, all else held equal, but often a few pages isn’t enough to give an idea proper treatment.)
There are at least two other approaches for finding “unnoticed” ideas/questions which seem to have sometimes worked for me, but which I’m less sure would often be useful for many people, and less sure I’ll describe clearly. These are:
- Trying to sketch out causal diagrams of the pathway to something (e.g., an existential catastrophe) happening
- I think that doing something like this has sometimes helped me notice that there are:
- assumptions or steps missing in the standard/fleshed-out stories of how something might happen,
- alternative pathways by which something could happen, and/or
- alternative/additional outcomes that may occur
- See also
- Trying to define things precisely, and/or to precisely distinguish concepts from each other, and seeing if anything interesting falls out
- Here’s an abstract example, but one which matches various real examples that have happened for me:
- I try to define X, but then notice that that definition would fail to cover some cases of what I’d usually think of as X, and/or that it would cover some cases of what I’d usually think of as Y (which is a distinct concept).
- This makes me realise that X and/or Y might be able to take somewhat different forms or occur via different pathways to what was typically considered, or that there’s actually an extra requirement for X or Y to happen that was typically ignored.
- I feel like it’d be easy to misinterpret my stance here.
- I actually think that definitions will never or almost never really be “perfect,” and I agree with the ideas in this post (see also family resemblance). And I think that many debates over definitions are largely nitpicking and wasting time.
- But I also think that, in many cases, being clearer about definitions can substantially benefit both thought and communication.
I should again mention that I’m only ~ 1.5 years into my research career, so maybe I’ll later change my mind about a bunch of those points, and there are probably a lot of useful things that could be said on this that I haven’t said.
See the parts of the transcript after Howie asks “Do you know what it would mean for the arguments to be more sussed out?”
We sometimes get tired or have trouble focusing. Sometimes this happens even when we’ve had enough sleep (just to get an obvious solution out of the way: sleep/napping). What are your favorite things to do when focusing seems hard or you feel tired? Do you use any particular nootropics, supplements, air quality monitor, music, or exercise routine?
Regular exercise certainly helps. Haven’t tried anything else. Mostly I’ve just acclimated to getting work done even though I’m tired. (Not sure I would recommend that “solution,” though!)
My favourite thing to do is to stop working! Not all days can be good days and I became a lot happier and more productive when I stopped beating myself up for having bad days and allowed myself to take the rest of the afternoon off.
I haven’t figured this out yet and am keen to learn from my coworkers and others! Right now I take a lot of caffeine, and I suspect that if I were more careful about optimization, I should be cycling drugs on a weekly basis rather than taking the same one every day (especially a drug like caffeine that has tolerance and withdrawal symptoms).
I’ve had lots of ongoing and serious problems with fatigue and have tried many interventions. Certainly caffeine (ideally with l-theanine) is a nice thing to have, but tolerance is an issue. Right now what seems to work for me (no idea why) is a greens powder called Athletic Greens. I’m also trying pro/prebiotics, which might be helping. Magnesium supplementation also might have helped. A medication I was taking was causing some problems as well, giving me some really intense fatigue on occasion (again, probably…). It’s super hard to isolate cause and effect in this area, as there are so many potential causes. I’d say it’s worth dropping a lot of money on different supplements and interventions and seeing what helps. If you can consistently increase energy by 5–10% (something I think is definitely on the table for most people), that adds up really quickly in terms of the amount of work you can get done, happiness, etc. Ideally you’d do this by introducing one intervention at a time for 2–4 weeks each. I haven’t had patience for that and am currently just trying a few things at once; then I figure I can cut out one at a time and see what helped. Things I would loosely recommend trying (aside from exercise, sleep, etc.): prebiotics, good multivitamins, checking for food intolerances, and checking if any pills you take are having adverse effects.
I do also work through tiredness sometimes and find it helpful to do some light exercise (for me, games in VR) to get back some energy. That also works as a decent gauge for whether I’ll be able to push past the tiredness. If playing 10 min of Beatsaber feels like a chore, I probably won’t be able to work.
How you rest might also be important. E.g., you might need time with little input so your default mode network can do its thing. No idea how big of a deal this is, but I’ve found going for more walks with just music (or silence) to maybe be helpful, especially in that I get more time for reflection.
I’ve also been experimenting with measuring heart rate variability using an app called Welltory. That’s been kind of interesting in terms of raising some new questions though I’m still not sure how I feel about it/how accurate it is for measuring energy levels.
I’ve bought some Performance Lab products (following a recommendation from Alex in a private conversation). They have better reviews on Vaga and are a bit cheaper than the Athletic Greens.
I find that being tired makes my mind wander a lot when reading longform things (e.g., papers, posts, not things like Slack messages or emails), so when I’m tired I usually try to do things other than reading.
If I’m just a bit or moderately tired, I usually find I’m still about as able to write as normal. If I’m very tired, I’ll still often be able to write quickly, but then when I later read what I wrote I’ll feel that it was unclear, poorly structured, and more typo-strewn than usual. So when very tired, I try to avoid writing longform things (e.g., actual research outputs).
Things I find I’m still pretty able to do when tired include commenting on documents people want input on (I think I’m more able to focus on this than on regular reading because it’s more “interactive” or something), writing things like EA Forum comments, replying to emails and Slack messages and the like, doing miscellaneous admin-y tasks, and reflecting on the last week/month and planning the next. So I often do a disproportionate amount of such tasks during evenings or during days when I’m more tired than normal, and at other times do a disproportionate amount of reading and “substantive” writing.
Also, I’m fortunate enough to have flexible hours. So sometimes I just work less on days when I’m tired (perhaps spending more time with my wife), and then make up for it on other days.