With the Future of the World in Your Hands, Think for 6.77 Years!
How long should you think about a problem that could determine the future of all existence? A mathematical journey from simple bets to existential risks.
Imagine a simple game. There’s a jar full of pebbles, and you’re offered a bet. If you can estimate the number of pebbles to within 10%, you win $100,000. If you’re wrong, you lose $100,000. You have as much time as you want to think. How long should you spend? A day? A week? A year?
Now, imagine a much bigger game. You are part of a team that wants to start OpenAI. Your technology could be revolutionary, potentially ushering in an era of explosive economic growth and solving many of the world’s problems. But it also carries risks. If mishandled, it could lead to a global catastrophe, perhaps even human extinction.
Faced with this monumental uncertainty, how much time should you spend thinking about the risks, the ethics, and the safeguards before pushing forward?
This question feels impossibly complex. But we can use simplified mathematical models to make it more concrete. The goal isn’t to find a single “right” answer, but to build a framework that clarifies the trade-offs involved. What follows is an exploration in three models, moving from a simple world to a progressively more complex and realistic one.
(Special thanks to Gemini 2.5 for guiding me through this mathy adventure!)
The Ground Rules: Our Simplified World
To make the problem solvable, we have to make some simplifications. It’s crucial to state these upfront. After all, all models are wrong, but some are useful.
Risk-neutrality: We’ll assume our decision-maker is risk-neutral: losing M hurts them no more than winning M delights them. This is a huge simplification, especially when one of the outcomes is “the end of the world,” but it keeps the math tractable.
Binary outcome: The game is all-or-nothing: explosive growth or total loss. The real world has a spectrum of outcomes.
Thinking helps: We’ll assume that spending time t thinking improves your probability of success, p(t). We’ll model this with a “diminishing returns” curve. Your first hour of thinking helps a lot; your thousandth hour helps much less.
Specifically, we’ll model the probability of success with the function:

p(t) = p_0 + (p_max − p_0) · (1 − e^(−k·t))

Here, p_0 is your initial guess (let’s say 50%), p_max is the best you could ever do (say, 95%), and k is your “learning rate.” To make k intuitive, we’ll define it by a “dumbness half-life”: the time it takes you to do half of all the learning you’ll ever do. Setting 1 − e^(−k·t) equal to 1/2 gives k = ln(2) / half-life, so a longer half-life means a smaller k.
For example, a dumbness half-life of 10 years gives k = ln(2)/10 ≈ 0.07 per year. This seems like a very optimistic estimate, judging by the progress the AI safety movement has made over the ten years that I’ve been following it.
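To make the curve concrete, here is a minimal Python sketch of this learning model; the function names and default values are mine, matching the assumptions above.

```python
import math

def learning_rate(half_life_years: float) -> float:
    """k such that half of all the learning happens within one half-life."""
    return math.log(2) / half_life_years

def p_success(t_years: float, p0: float = 0.5, p_max: float = 0.95,
              half_life_years: float = 10.0) -> float:
    """Probability of success after t years of thinking, with diminishing returns."""
    k = learning_rate(half_life_years)
    return p0 + (p_max - p0) * (1 - math.exp(-k * t_years))

print(round(learning_rate(10), 3))    # 0.069, i.e. k ~ 0.07 per year
for t in (0, 1, 10, 30):
    print(t, round(p_success(t), 3))  # 0.5, 0.53, 0.725 (halfway to 0.95), 0.894
```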
With these rules in place, let’s play the game.
Model 1: Is This Worth My Time?
In the simplest world, your time has a direct opportunity cost. If you spend an hour thinking about pebbles, you’re not spending that hour working your job.
Let's say your time is worth w dollars per year. The net value of playing the game is the expected winnings minus the cost of your time:

V(t) = p(t)·M − (1 − p(t))·M − w·t = (2·p(t) − 1)·M − w·t

The first term is the expected winnings, from which we subtract the cost of your time.
To maximize this, a little calculus tells us we should stop thinking at the exact moment when the marginal benefit of one more hour of thinking equals the marginal cost of that hour. This gives us the condition:

2·M·p'(t) = w
Here, p'(t) is the rate at which your probability is improving. The insight here is clear: the stakes M are crucial. If M is $1,000,000, the left side of the equation is large, justifying a lot of thinking to balance it against your wage w. If M is just $1, the left side is tiny, and you should stop thinking almost immediately because it’s not worth your time. For example, with an initial guess that’s a coin toss (p_0 = 0.5), an annual salary of $50k, $1 million at stake, a 95% cap on certainty, and our k ≈ 0.07 from above, we should think for about 3.3 years.
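Here is a minimal sketch of that calculation, assuming the stopping condition 2·M·p'(t) = w derived above; the closed-form rearrangement is mine.

```python
import math

# Assumed values from the worked example above.
M = 1_000_000          # stake in dollars
w = 50_000             # value of your time, dollars per year
p0, p_max = 0.5, 0.95  # initial and maximum probability of success
k = 0.07               # annual learning rate (10-year dumbness half-life)

# Stopping condition: 2 * M * p'(t) = w, with p'(t) = k * (p_max - p0) * exp(-k * t).
# Solving for t gives the closed form below
# (valid when 2 * M * k * (p_max - p0) > w; otherwise don't bother thinking at all).
t_star = math.log(2 * M * k * (p_max - p0) / w) / k
print(round(t_star, 1))  # ~3.3 years
```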
This model matches our basic economic intuition. But what if the cost isn’t just our wage?
Model 2: The Cost of Delay
Let’s change the scenario. You’re not just winning money; you’re winning an investment that grows exponentially. The biggest cost of thinking isn’t your hourly wage, but the cost of delaying that investment. Every hour you spend thinking is an hour that potential fortune isn’t growing in the market.
Let’s assume the market provides a constant annual return of r. The value of winning the game at time t is discounted by the growth you missed. The new goal is to maximize the expected future value:

EV(t) = (2·p(t) − 1) · M · e^(−r·t)
When we do the calculus to maximize this, something astonishing happens. The optimization condition becomes:

2·p'(t) = r · (2·p(t) − 1)
The stake M has completely vanished from the equation!
Why? Because M scales both the potential reward and the opportunity cost of delay equally. If the stakes are high, the potential gain from thinking is high, but the cost of not having that money invested is also high. These two effects perfectly cancel out.
The decision is now a pure battle between your personal learning rate, k, and the market’s growth rate, r. Assuming we start with a 50% chance of success, this simplifies to a beautiful closed-form formula for the optimal thinking time, t*:

t* = (1/k) · ln(1 + k/r)
Let’s run the numbers:
Assume a “dumbness half-life” of 10 years. This corresponds to an annual learning rate of k ≈ 0.07 (see above). We’ll again assume p_0 = 0.5 and p_max = 0.95. (A short sketch after the two scenarios reproduces these numbers.)
Scenario A: 30% Annual Growth: Think for 3 years.
Scenario B: 100% Annual Growth: Think for 1 year.
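Here is that sketch, a minimal check in Python assuming the closed form t* = (1/k) · ln(1 + k/r) derived above.

```python
import math

k = 0.07  # annual learning rate, ~ln(2)/10 from the 10-year dumbness half-life

def optimal_thinking_time(r: float) -> float:
    """Optimal years of thinking when the stake compounds at constant rate r."""
    return math.log(1 + k / r) / k

print(round(optimal_thinking_time(0.30), 2))  # Scenario A: ~3.0 years
print(round(optimal_thinking_time(1.00), 2))  # Scenario B: ~0.97, i.e. about 1 year
```

Note that the stake M appears nowhere in the function, which is the whole point.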
This is a powerful and counter-intuitive result. It suggests that if your decision is about deploying a resource that grows exponentially, the size of that resource doesn’t matter. What matters is how fast you learn relative to how fast the world grows.
One might wonder why anyone would think for years to win or lose $1, but that’s just a limitation of the model: in the real world we have opportunity costs outside the game, and in the model we don’t. If we use the model as an analogy for existential catastrophes, though, the stakes are high enough for the conclusion to make sense.
Model 3: An Accelerating World
Our last model was interesting, but is a “constant growth rate” realistic? Research on long-term historical trends suggests that economic growth isn't just exponential, it's been superexponential. As the world economy has gotten larger, the doubling time has gotten shorter. Extrapolating this trend, as David Roodman has done, suggests we could be heading for a period of explosive, near-vertical growth – an economic singularity – sometime this century.
What does our model say if we live in that world? The logic is the same: we want to maximize our expected value. But the math becomes more beautiful and more complex.
Step 1: Defining Value in a Speeding-Up World
The core change is that the rate of return, r, is no longer a constant. It's now a function of time, r(t), that starts small and grows. The value of our winnings M at the end of our investment horizon now depends on integrating this changing rate over the time we are invested.
The Expected Future Value, EV(t), is:

EV(t) = (2·p(t) − 1) · M · exp( ∫ from t to T_horizon of r(s) ds )
Here, the integral simply sums up all the growth between the time you finish thinking, t, and some distant future time, T_horizon.
Step 2: Finding the Optimum and Purging M
To find the optimal thinking time, we again take the derivative of EV(t) with respect to t and set it to zero. This requires the product rule and a bit of calculus for integrals – specifically, the Fundamental Theorem of Calculus, which tells us:

d/dt [ ∫ from t to T_horizon of r(s) ds ] = −r(t)
The derivative is:

EV'(t) = 2·p'(t) · M · exp( ∫ r(s) ds ) − (2·p(t) − 1) · r(t) · M · exp( ∫ r(s) ds )
This looks messy, but notice two things:
The exponential term exp( ∫ r(s) ds ) appears in both parts. We can factor it out and, since it’s never zero, eliminate it.
The stake M is also a factor in every single term.
Let’s factor them both out:

M · exp( ∫ r(s) ds ) · [ 2·p'(t) − r(t) · (2·p(t) − 1) ] = 0
For this to be true, the part in the brackets must be zero. And just like that, M vanishes once again! The fundamental cancellation we saw in Model 2 holds even in this more complex world. The optimization rule is:

2·p'(t) = r(t) · (2·p(t) − 1)
The trade-off is still a pure battle between your learning rate and the world’s growth rate. The only difference is that the world’s growth rate is now a moving target.
Step 3: Getting Specific and Hitting a Wall
To solve this, we must now define our growth function. We model the accelerating growth with a hyperbolic curve. If the singularity is at time T_singularity and the growth rate is anchored by a constant C, then:

r(t) = C / (T_singularity − t)
Substituting this into our optimization rule, and again assuming p_0 = 0.5 so the remaining probability factors cancel, gives us a specific equation to solve for t:

k · (T_singularity − t) · e^(−k·t) = C · (1 − e^(−k·t))

Here t is trapped both inside and outside the exponent. This is a transcendental equation and has no solution in terms of elementary functions.
Step 4: The Lambert W Function and the Final Formula
To write down a formal solution, we need a special tool: the Lambert W function. This function is defined as the solution to the equation z = xe^x. If you have an equation in that form, the solution is simply x = W(z).
It is a standard, well-understood function in mathematics, available in tools like Wolfram Alpha and Python’s SciPy library. By performing some clever algebraic manipulation, our transcendental equation can be rearranged into the required form. The resulting closed-form solution for the optimal thinking time, t*, is:

t* = T_singularity + C/k − W(Z)/k, where Z = C · e^(C + k·T_singularity)
This formula, while not simple, is the true, general solution. It elegantly combines the key parameters of the problem:
T_singularity: How much time the world has left.
C: How fast the world is accelerating.
k: How fast you can learn.
W: The mathematical glue needed to solve this type of growth problem.
Z: The combined measure of growth pressure and learning potential that W acts on.
So, What’s the Answer?
For those of us who aren’t using special functions every day, the most practical way to solve this is to ask a computational tool to solve the equation from Step 3 directly.
Let’s plug in the numbers for a person with our standard 10-year dumbness half-life (k ≈ 0.07) living in a world that starts with 8% annual growth today and accelerates towards a singularity in 2047 – a point the investment horizon has to stop just short of, or the growth integral diverges and the result isn’t finite.
Measuring time from today, this gives us T_singularity ≈ 22 years; and since today’s growth rate is r(0) = C / T_singularity = 8%, we get C = 0.08 · 22 ≈ 1.76.
Solving for t gives an optimal thinking time of about 6.77 years.
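For the curious, here is a minimal sketch of that computation in Python, assuming the Step 3 equation and Step 4 formula as reconstructed above; it solves the problem both ways and the answers agree.

```python
import math
from scipy.optimize import brentq
from scipy.special import lambertw

# Assumed parameters from the text.
k = 0.07        # annual learning rate (10-year dumbness half-life)
T_sing = 22.0   # years until the singularity
C = 1.76        # growth anchor, chosen so that r(0) = C / T_sing = 8%

# Step 3: k * (T_sing - t) * e^(-k t) = C * (1 - e^(-k t)), solved numerically.
def step3(t: float) -> float:
    return k * (T_sing - t) * math.exp(-k * t) - C * (1 - math.exp(-k * t))

t_numeric = brentq(step3, 1e-9, T_sing - 1e-9)

# Step 4: the Lambert W closed form.
Z = C * math.exp(C + k * T_sing)
t_closed = T_sing + C / k - lambertw(Z).real / k

print(round(t_numeric, 2), round(t_closed, 2))  # both ~6.77 years
```

Either route gives the same ~6.77 years; the closed form just makes the dependence on C, k, and T_singularity explicit.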
This is the most stunning result of all. In a world poised for an imminent economic explosion, the rational choice is not to rush, but to think for a very long time. Why? Because the opportunity cost of delay is low now. The explosive growth is in the future. You have a limited window – a “calm before the storm” – where thinking is cheap. The model suggests you should use nearly all of that window to improve your chances of getting the monumental outcome right.
Conclusion: The Humility of Models
So, what have we learned?
In a simple world, you should think more about bigger problems.
If the problem involves deploying a resource that grows exponentially, the size of the resource becomes irrelevant. It’s a race between your learning and the world’s growth.
If that growth is accelerating, the rational strategy may be to think for a surprisingly long time to take advantage of the low initial opportunity cost.
Of course, we must return to reality. These models are not reality. The real world is not risk-neutral, outcomes aren’t binary, and “thinking” is a complex, multi-faceted activity. A startup building transformative AI is not observing an external growth rate; it is creating it to some unknown extent, a paradox that collapses the logic of the last two models.
The particular values we choose for the various constants have a great influence too. Maybe the world is so fragile that a random strategy is only 10% likely to succeed, or so resilient that it’s 90% likely to succeed. Maybe our peak certainty is capped well below 95%. Maybe learning is much slower than a dumbness half-life of 10 years would have us believe.
But even a flawed map can point you in the right direction. These models challenge our intuition and provide a language for discussing otherwise intractable problems. They suggest that for the most important decisions in history, the question of “how long to think” is not trivial. It is a deep, difficult, and profoundly mathematical trade-off. Perhaps the most important takeaway is that for these challenges, the “thinking” – about the goals, the risks, and the very definition of success – is some of the most valuable work we can do.