Structure

There is a certain structure to the answers or to the effects that the answers anticipate.

Better collective decision-making has two parts: being collective and being better. Each has its own failure modes:

  1. Failure modes of collective, cooperative, or collaborative systems can be
    a. failure modes that follow directly from the collaborative nature of the system, and, a bit more speculatively,
    b. failure modes that follow from a potential effect such systems might have of incentivizing or facilitating more collaboration generally.
  2. Failure modes of better decisions are a more monolithic group. They could be divided into adverse effects vs. intentional abuse, or into effects of the process of determining good decisions vs. effects of the decisions themselves. But in both cases, one category would currently contain only one answer, so the distinction doesn’t seem worthwhile.

Summaries

Here I’ll briefly summarize the most worrying answers. These summaries may seem weakly argued or cryptic. If so, please see the full text for further clarifications.

Collaborative Systems

Direct Effects

This corresponds to 1.a. of the structure above. Collaborative systems may backfire in at least the following ways:

  1. Legibility. Communication between people will make it necessary to make knowledge more “legible” (in the Seeing Like A State sense1). This can have several adverse effects:
    1. Discrimination against illegible values. It may be very hard to model the full complexity of people’s moral intuitions. In practice, people tend toward systems that greatly reduce the dimensionality of what they typically care about. The results are utils, DALYs, SWB, consumption, life years, probability of any existential catastrophe, etc. Collaborative systems would incentivize such low-dimensional measures of value, and through training, people may actually come to care about them more. It’s contentious and a bit circular to ask whether this is good or bad. But at least it’s not clearly neutral or good.
    2. Centralized surveillance. My answer titled “Legibility” argues that collaborative methods will make it necessary to make considerations and values more legible than they are now so they can be communicated and quantified. This may also make them more transparent and thus susceptible to surveillance. That, in turn, may enable more powerful authoritarian governments, which may steer the world into a dystopian lock-in state.
  2. Averaging effect. Maybe there are people who are particularly inclined toward outré opinions. These people will be either unusually right or unusually wrong for their time. Maybe there’s more benefit in being unusually right than there is harm in being unusually wrong (e.g., thanks to the law). And maybe innovation toward most of what we care about is carried by unusually right people. (I’m thinking of Newton here, whose bad ideas didn’t seem to have much of an effect compared to his good ideas.) Collaborative systems likely harness – explicitly or implicitly – some sort of wisdom-of-the-crowds effect. But such an effect is likely to average away the unusually wrong and the unusually right opinions alike, so such systems might slow progress. (See the toy simulation after this list.)
  3. Silencing of lone dissenters. The averaging effect above may be used intentionally to silence individuals or sufficiently small groups that disagree with the majority. E.g., Anna might only be productive at 20°C with an SD of 1°C. She joins a company that has determined in a long, collaborative process that their employees are most productive at 22°C with an SD of 5°C, and so has set the temperature in the office to 22°C. If Anna could restart that long collaborative process, the optimal temperature might now come out at 21°C. But her boss tells her to just go ahead and rerun the process in her free time, which both know is infeasible given how much work it is and how many people it involves.
  4. More power to the group. It might be that the behavior of groups (e.g., companies) is generally worse (e.g., more often antisocial) than that of individuals. Collaborative systems would shift more power from individuals to groups. So that may be undesirable.
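
To make the averaging effect in point 2 concrete, here is a toy simulation; all numbers are made up purely for illustration:

```python
import random

random.seed(0)

TRUE_VALUE = 100  # the quantity everyone is trying to estimate

def sample_estimates(n_typical=98, n_contrarian=2):
    """Toy population: typical estimators cluster around a somewhat-wrong
    conventional estimate; contrarians are either spot-on or wildly off."""
    typical = [random.gauss(90, 5) for _ in range(n_typical)]
    contrarian = [random.choice([TRUE_VALUE, 40]) for _ in range(n_contrarian)]
    return typical + contrarian

estimates = sample_estimates()
crowd_mean = sum(estimates) / len(estimates)

# The crowd mean lands near the conventional 90: the unusually right
# contrarian gets averaged away along with the unusually wrong one,
# even if following the former would have been far more valuable.
print(f"crowd mean: {crowd_mean:.1f}, truth: {TRUE_VALUE}")
```

The crowd mean is less noisy than most individual estimates, but it also discards exactly the outliers that, per the argument above, drive progress.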

Incidental, Concomitant Effects

This corresponds to 1.b. of the structure above. Collaborative systems may incentivize cooperation and collaboration more generally (though this requires an additional inferential step), which is probably vastly positive on balance but may backfire in at least the following ways. I didn’t originally include answers like these because of that additional inferential step, which I also think is an unlikely one, so this list is probably particularly incomplete.

  1. Collusion. Cooperation can enable collusion, cartels, and bribes – various antisocial behaviors that wouldn’t be as easy if there were no way to trust the other party. (See this comment.)
  2. Exploitation, deception, and coercion. Promoting cooperation will involve promoting the capacities that enable cooperation. Such capacities include understanding others, recognizing honesty and deception, and making commitments. Those same capacities can also be used to understand others’ vulnerabilities, deceive them, and commit to threats. These uses, however, can also be prosocial again, e.g., in the case of laws that are enforced through the threat of a fine. (See this comment.)
  3. Reduced competition. Cooperation may reduce competition, which would’ve facilitated learning or training. (See this comment.)
  4. Value lock-in. The majority may use the enhanced coordination to lock in its current values and thereby prevent whatever we regard as moral progress from continuing. (See this answer.)

Better Decisions or Epistemics

This corresponds to 2. of the structure above. Better decisions or better epistemics may backfire in at least the following ways:

  1. Overconfidence may be necessary for motivation. E.g., entrepreneurs are said to be overconfident that their startup ideas will succeed. Maybe increased rationality (individual or collective) will stifle innovation. (See section Psychological Effects.)
  2. The sunk cost fallacy may be necessary for sustained motivation. E.g., EAs are known to start a lot of projects and abandon them again quickly when they learn of something even better they could do. That might continue indefinitely, so that no project gets off the ground. (See section Psychological Effects.)
  3. Decision theoretic catch-22s. The Soares and Fallenstein paper “Toward Idealized Decision Theory” presents an example where a dumber agent who knows that they are playing against a smarter agent can exploit the fact that the smarter agent knows more about them than they do about the smarter agent. (See section Modified Ultimatum Game.)
  4. Less experimentation through convergence to best practices. Good epistemics may make it patently obvious what the best known practices are (the best among known practices, not merely the best-known ones). Most actors benefit individually from adopting the best practices. Hence, fewer of them experiment with new practices that they find promising, and so we may get stuck in some local optimum. (See this answer.)
  5. Benefiting malevolent actors. Systems to produce better epistemics will likely be value-neutral. Hence they can be abused by malevolent actors. (See section Benefiting Unscrupulous People.)
  6. Conflict through coordination failures. Better epistemics may function like weapons that can be used against other groups. This may lead to arms races, full-blown conflicts, or may cause negative externalities for groups that can’t defend themselves – nonhumans, future people, et al. (See this answer.)
  7. Social ramifications. Using systems to generate better decisions may necessitate actions that have negative social effects. (See section Social Effects.)
  8. Loss of valuable ambiguity. It might be that ambiguity plays an important role in social interactions. There is the stylized example where the number of rounds of an iterated game has to be kept secret or else that knowledge will distort the game. I’ve also read somewhere that conflicts between two countries can be exacerbated if the countries have too low-quality intelligence about each other but also if they have too high-quality intelligence about each other – though I can’t find the source, so I may be misremembering it. Charity evaluators also benefit from ambiguity: if the only reasons a charity could have for declining an evaluation or blocking publication of the results were ones that reflect badly on the charity, fewer charities would be willing to undergo the evaluation process. But since there are also good and neutral reasons, charities always retain plausible deniability.

Conclusion

My current take is that the advantages probably outweigh the risks but that it would be negligent not to try to make the risks as low as possible.

My current take is also that the risks from a particular system are probably proportional to the popularity of the system, so long as it is marketed to everyone equally. Most likely, this means that we can stay vigilant, observe whether any of these risks or new ones manifest, and then correct course or abandon the project. But it might also mean that in very rare cases we’ll see such runaway growth that we can’t control it anymore.

Anyone might develop such a system at any time, and the independent, almost two-year-old startup Causal proves that this can happen. So if it were assured that there won’t be any runaway growth, I’d be quite confident that the best approach is to develop such a system and to design and market it carefully with all these risk factors in mind.

But since there is a remote risk of runaway growth, I’m unsure about this conclusion. I’m leaning toward the assessment that the probability of runaway growth is so tiny (and that, if it happens, it will likely happen within groups that are somewhat cooperative) that it can be ignored. But maybe one of the best uses of such a system is to test whether it’s a good idea to continue developing the system.

The Question

I’ve started work on a project that aims to do something like “improving our collective decision-making.”2 Broadly, it’s meant to enable communities of people who want to make good decisions to collaboratively work out what these decisions are. Individual rationality is helpful for that but not the focus.

Concretely, it’s meant to make it easier to collaborate on probabilistic models in areas where we don’t have data. You can read more about the vision in Ozzie Gooen’s Less Wrong post on Squiggle. But please ignore this for the sake of this question. I hope to make the question and answers to it useful to a broader audience by not narrowly focusing it on the concrete thing I’m doing. Other avenues to improve collective decision-making may be improving prediction markets and developing better metrics for things that we care about.

Before I put a lot of work into this, I would like to check whether this is a good goal in the first place – ignoring tractability and opportunity costs.3 By “good” I mean something like “robustly beneficial,” and by “robustly beneficial” I mean something like “beneficial across many plausible worldviews and futures, and morally cooperative.”

My intuition is that it’s about as robustly positive as it gets, but I feel like I could easily be wrong because I haven’t engaged with the question for a long time. Less Wrong and CFAR seem to have similar goals, though I perceive a stronger focus on individual rationality. So I like to think that people have thought and maybe written publicly about what the major risks are in what they do.

I would like to treat this question like a Stack Exchange question where you can also submit your own answers. But I imagine there are many complementary answers, so I’m hoping that people can add more, upvote the ones they find particularly concerning, important, or otherwise noteworthy, and refine them with comments.

For examples of pretty much precisely the type of answers I’m looking for, see my answers titled “Psychological Effects” and “The Modified Ultimatum Game.”

A less interesting example is my answer “Legibility.” I’m less interested in it here because it describes a way in which a system can fail to attain the goal of improving decision-making rather than a way in which the successful realization of that goal backfires. I wanted to include it as a mild counterexample.

If you have ideas for further answers, it would be interesting if you could also think of ways to work around them. It’s usually better not to abandon a project just because there is some way in which it can backfire but to work out how the failure mode can be avoided without sacrificing all of the positive effects of the project.

You can also message me privately if you don’t want to post your answer publicly.

Acknowledgements: Thanks for feedback and ideas to Sophie Kwass, Ozzie Gooen, Justin Shovelain, and everyone who answered in the EA Forum!

The Answers

Legibility

This is a less interesting failure mode as it is one where the systems that we create to improve our decision-making actually fail to achieve that goal. It’s not one where successfully achieving that goal backfires.

I also think that while this may be a limitation of some collaborative modeling efforts, it’s probably not a problem for prediction markets.

The idea is that collaborative systems will always, at some stage, require communication, and specifically communication between brains rather than within brains. To make ideas communicable, they have to be made legible. (Or maybe literature, music, and art are counterexamples.) By legible, I’m referring to the concept from Seeing Like A State.1

In my experience, this can be very limiting. Take for example what I’ll call the Cialdini puzzle:

Robert Cialdini’s Wikipedia page says “He is best known for his book Influence”. Since its publication, he seems to have spent his time directing an institute to spread awareness of techniques for success and persuasion. At the risk of being a little too cynical – a guy knows the secrets of success, so he uses them to… write a book about the secrets of success? If I knew the secrets of success, you could bet I’d be doing much more interesting things with them. All the best people recommend Cialdini, and his research credentials are impeccable, but I can’t help wondering: If he’s so smart, why isn’t he Emperor?

It seems to me like a common pattern that for certain activities the ability to do them well is uncorrelated or even anticorrelated with the ability to explain them. Some of that may be just because people want to keep their secrets, but I don’t think that explains much of it.

Hence Robert Cialdini may be > 99th percentile at understanding and explaining social influence, but in terms of doing social influence, that might’ve boosted him from the 40th to the 50th percentile or so. (He says his interest in the topic stems from his being particularly gullible.) Meanwhile, all the people he interviews because they have a knack for social influence are probably 40th to 50th percentile at explaining what they do. I don’t mean that they are average at explaining in general but that what they do is too complex, nuanced, unconscious, intertwined with self-deception, etc. for them to grasp it in a fashion that would allow for anything other than execution.

Likewise, a lot of amazing, famous writers have written books on how to write. And almost invariably these books are… unhelpful. If these writers followed the advice they set down in their own books, they’d be lousy writers. (This is based on a number of Language Log posts on such books.) Meanwhile, some of the most helpful books on writing that I’ve read were written by relatively unknown writers. (E.g., Style: Toward Clarity and Grace.)

My learning of Othello followed a similar trajectory. I got from a Kurnik rating of 1200 up to 1600 quite quickly by reading every explanatory book and text on game strategy that I could find and memorizing hundreds of openings. Beyond that, the skill necessary to progress further becomes so complex, nuanced, and unconscious that, it seems to me, it can only be attained through long practice, not taught. (Except, of course, if the teaching is all about practice.) And I didn’t like practice because it often meant playing against other people. (That is just my experience. If someone is an Othello savant, they may rather feel like some basic visualization practice unlocked the game for them, so that they’d still have increasing marginal utility from training around the area where it started dropping for me.)

Orthography is maybe the most legible illegible skill that I can think of. It can be taught in books, but few people read dictionaries in full. For me it sort of just happened rather suddenly that from one year to the next, I made vastly fewer orthographic mistakes (in German). It seems that my practice through reading must’ve reached some critical (soft) threshold where all the bigrams, trigrams, and exceptions of the language became sufficiently natural and intuitive that my error rate dropped noticeably.

For this to become a problem, there’d have to be highly skilled practitioners, like the sort of people Cialdini likes to interview, who are brought together by a team of researchers to help them construct a model of some long-term future trajectory.

These skilled practitioners will do exactly the strategically optimal thing when put in a concrete situation, but in the abstract environment of such a probabilistic model, their predictions may be no better than anyone’s. It’ll take well-honed elicitation methods to get high-quality judgments out of these people, and then a lot of nuance may still be lost because what is elicited and how it fits into the model is probably again something that the researchers will determine, and that may be too low-fidelity.

Prediction markets, on the other hand, tend to be about concrete events in the near future, so skilled practitioners can probably visualize the circumstances that would lead to any outcome in sufficient detail to contribute a high-quality judgment.

Modified Ultimatum Game

A very good example of the sort of risks that I’m referring to is based on a modified version of the ultimatum game and comes from the Soares and Fallenstein paper “Toward Idealized Decision Theory”:

Consider a simple two-player game, described by Slepnev (2011), played by a human and an agent which is capable of fully simulating the human and which acts according to the prescriptions of [Updateless Decision Theory (UDT)]. The game works as follows: each player must write down an integer between 0 and 10. If both numbers sum to 10 or less, then each player is paid according to the number that they wrote down. Otherwise, they are paid nothing. For example, if one player writes down 4 and the other 3, then the former gets paid $4 while the latter gets paid $3. But if both players write down 6, then neither player gets paid. Say the human player reasons as follows:

I don’t quite know how UDT works, but I remember hearing that it’s a very powerful predictor. So if I decide to write down 9, then it will predict this, and it will decide to write 1. Therefore, I can write down 9 without fear.

The human writes down 9, and UDT, predicting this, prescribes writing down 1.

This result is uncomfortable, in that the agent with superior predictive power “loses” to the “dumber” agent. In this scenario, it is almost as if the human’s lack of ability to predict UDT (while using correct abstract reasoning about the UDT algorithm) gives the human an “epistemic high ground” or “first mover advantage.” It seems unsatisfactory that increased predictive power can harm an agent.

A solution to this problem would have to come from the area of decision theory. It probably can’t be part of the sort of collaborative decision-making system that we envision here. Maybe there is a way to make such a problem statement inconsistent because the smarter agent would’ve committed to writing down 5 and signaled that sufficiently long in advance of the game. Ozzie also suggests that introducing randomness along the lines of the madman theory may be a solution concept.
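
For concreteness, here is a minimal sketch of the payoff rule and the dynamic described in the quote; the predictor’s “full simulation” of the human is stubbed out as a simple assumption rather than implemented:

```python
def payoffs(a: int, b: int) -> tuple[int, int]:
    """Payoff rule of the modified ultimatum game: each player names an
    integer from 0 to 10; each is paid their own number iff the sum is at most 10."""
    assert 0 <= a <= 10 and 0 <= b <= 10
    return (a, b) if a + b <= 10 else (0, 0)

# The dynamic from the quote: the human writes down 9, reasoning that the
# predictor will foresee it; the predictor, which fully simulates the human,
# can then do no better than writing down 1.
human_move = 9
predicted_human_move = human_move           # stand-in for the full simulation
predictor_move = 10 - predicted_human_move  # best response given the prediction
print(payoffs(human_move, predictor_move))  # -> (9, 1)
```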

Benefiting Unscrupulous People

A system that improves collective decision-making is likely value-neutral, so it can also be used by unscrupulous agents for their nefarious ends.

Moreover, unscrupulous people may benefit from it more because they have fewer moral side-constraints. If set A is the set of all ethical, legal, cooperative methods of attaining a goal, and set B is the set of all methods of attaining the same goal, then A ⊆ B. So it should always be at least as easy to attain a goal by any means necessary as it is by only ethical, legal, and cooperative means.

Three silver linings:

  1. Unscrupulous people probably also have different goals from ours. Law enforcement will block them from attaining those goals, and better decision-making will hopefully not get them very far.
  2. These systems are collaborative, so you can benefit from them more the more people collaborate on them (I’m not saying monotonically, just as a rough tendency). When you invite more people into some nefarious conspiracy, the risk that one of them blows the whistle increases rapidly (see the toy calculation after this list). (Though it may depend on the structure of the group. There are maybe some terrorist cells who don’t worry much about whistleblowing.)
  3. If a group is headed by a narcissistic leader, that person may see a collaborative decision-making system as a threat to their authority, so they won’t adopt it to begin with. (Though it might be that they like how collaborative systems can make it infeasible for individuals to put their individual opinions to the test, so that the leader can silence individual dissenters. This will depend a lot on implementation details of the system.)
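
As a toy illustration of point 2: if each additional conspirator independently blows the whistle with probability p, the chance that at least one does grows quickly with group size (the value of p is an assumption chosen purely for illustration):

```python
def p_whistleblown(n_people: int, p_each: float = 0.05) -> float:
    """Probability that at least one of n independent conspirators blows the whistle."""
    return 1 - (1 - p_each) ** n_people

for n in (2, 5, 10, 20):
    print(n, round(p_whistleblown(n), 2))
# -> 2 0.1, 5 0.23, 10 0.4, 20 0.64
```

The independence assumption is, of course, exactly what tight-knit groups like the terrorist cells mentioned above violate.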

More speculatively, we can also promote and teach the system such that everyone who learns to use it also learns about multiverse-wide superrationality, also known as evidential cooperation in large worlds (ECL). Altruistic people with uncooperative agent-neutral goals will reason that they can now realize great gains from trade by being more cooperative or else lose out on them by continuing to defect.

We can alleviate the risk further by marketing the system mostly to people who run charities, social enterprises, prosocial research institutes, and democratic governments. Other people will still learn about the tools, and there are also a number of malevolent actors in those generally prosocial groups, but it may shift the power a bit toward more benevolent people. (The Benevolence, Intelligence, and Power framework may be helpful in this context.)

Finally, there is the option to make it hard to make models nonpublic. But that would have other downsides, and it’s also unlikely to be a stable equilibrium as others will just run a copy of the software on their private servers.

Psychological Effects

Luke Muehlhauser warns that overconfidence and sunk cost fallacy may be necessary for many people to generate and sustain motivation for a project. (But note that the post is almost nine years old.) Entrepreneurs are said to be overconfident that their startup ideas will succeed. Maybe increased rationality (individual or collective) will stifle innovation.

I feel that. When I do calibration exercises, I’m only sometimes mildly overconfident in some credence intervals, and indeed, my motivation usually feels like, “Well, this is a long shot, and why am I even trying it? Oh yeah, because everything else is even less promising.” That could be better.

On a community level it may mean that any community that develops sufficiently good calibration becomes demotivated and falls apart.

Maybe there is a way of managing expectations. If you grow up in an environment where you’re exposed to greatly selection-biased news about successes, your expectations may be so high that any well-calibrated 90th percentile successes that you project may seem disappointing. But if you’re in an environment where you constantly see all the failures around you, the same level of 90th percentile success may seem motivating.

Maybe that’s also a way in which the EA community backfires. When I didn’t know about EA, I saw around me countless people who failed completely to achieve my moral goals because they didn’t care about them. The occasional exceptions seemed easy to emulate or exceed. Now I’m surrounded by people who’ve achieved things much greater than my 90th percentile hopes. So my excitement is lower even though my 90th percentile hopes are higher than they used to be.

Social Effects

Beliefs are often entangled with social signals. This can pose difficulties for what I’ll call a “truth-seeking community” in what follows.

When people want to disassociate from a disreputable group – say, because they’ve really never had anything to do with the group and don’t want that to change – they can do this in two ways: They can steer clear of anything that is associated with the disreputable group or they can actively signal their difference from the disreputable group.

Things that are associated with the disreputable group are, pretty much necessarily, things that are either sufficiently specific that they rarely come up randomly or things that are common but on which the group has an unusual, distinctive stance. Otherwise these things could not serve as distinguishing markers of the group.

If the disreputable group is small, is distinguished by an unusual focus on a specific topic, and a person wants to disassociate from them, it’s usually enough to steer clear of the specific topic, and no one will assume any association. Others will start out with a prior that the person is < 1% likely to be part of the group, and absent signals to the contrary, will maintain that credence.

But if the disreputable group is larger, at least in one’s social vicinity, or the group’s focal topic is a common one, then one needs to countersignal more actively. Others may start out with a prior that the person is ~ 30% likely to be part of the group and may avoid contact with them unless they see strong signals to the contrary. This is where people will find it necessary to countersignal strongly. Moreover, once there is a norm to countersignal strongly, the absence of such a signal or a cheaper signal will be doubly noticeable.
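
As a rough illustration of why the size of the prior matters here, a minimal Bayesian sketch; the likelihoods of someone omitting a countersignal are assumptions chosen purely for illustration:

```python
def posterior_member(prior: float, p_no_signal_if_member: float = 0.95,
                     p_no_signal_if_not: float = 0.6) -> float:
    """P(member | no countersignal) via Bayes' rule, with assumed likelihoods:
    members almost never countersignal; many non-members also don't bother."""
    joint_member = prior * p_no_signal_if_member
    joint_not = (1 - prior) * p_no_signal_if_not
    return joint_member / (joint_member + joint_not)

# With a 1% prior, staying silent barely moves the needle;
# with a 30% prior, silence alone pushes the posterior well above the prior.
print(round(posterior_member(0.01), 3))  # ~0.016
print(round(posterior_member(0.30), 3))  # ~0.404
```

With a small prior, silence is unremarkable; with a large prior, silence alone is evidence – which is what makes active countersignaling feel necessary.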

I see two, sometimes coinciding, ways in which that can become a problem. First, the disreputable group may be so because of their values, which may be extreme or uncooperative, and it is just historical contingency that they endorse some distinctive belief. Or second, the group may be disreputable because they have a distinctive belief that is so unusual as to reflect badly on their intelligence or sanity.

The first of these is particularly problematic because the belief can be any random one with any random level of likelihood, quite divorced from the extreme, uncooperative values. It might also not be so divorced, e.g., if it is one that the group can exploit to their advantage if they convince the right people of it. But the second is problematic too.

If a community of people who want to optimize their collective decision-making (let’s call it a “truth-seeking community”) builds sufficiently complex models, e.g., to determine the likelihood of intelligent life re-evolving, then maybe at some point they’ll find that one node in their model (a Squiggle program, a Bayesian network, or the like) would be informed by more in-depth research of a question that is usually associated with a disreputable group. They can use sensitivity analysis to estimate the cost of leaving the node as it is, but maybe it turns out that their estimate is quite sensitive to that node.

In the first case, in the case of a group that is disreputable by dint of their values, that is clearly a bad catch-22.

But it can also go wrong in the second case, the case of the group that is disreputable because of their unusual beliefs, because people in the truth-seeking community will usually find it impossible to assign a probability of 0 to any statement. It might be that their model is very sensitive to whether they assign 0.1% or 1% likelihood to a disreputable belief. Then there’s a social cost also in the second case: Even though their credence is low either way, the truth-seeking community will risk being associated with a disreputable group (which may assign > 90% credence to the belief), because they engage with the belief.
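
To illustrate what such a sensitivity check might look like in practice, here is a minimal Monte Carlo sketch; the model structure and all numbers are invented purely for illustration – a real analysis would run on the community’s actual model, e.g., a Squiggle program or Bayesian network:

```python
import random

random.seed(0)

def model_output(p_controversial: float, n: int = 100_000) -> float:
    """Toy model: the output depends multiplicatively on whether a rare,
    controversial event occurs (all numbers are made up for illustration)."""
    total = 0.0
    for _ in range(n):
        base = random.lognormvariate(0, 0.5)   # uncontroversial part of the model
        if random.random() < p_controversial:  # the disreputable-belief node
            base *= 200                        # the event would change the conclusion a lot
        total += base
    return total / n

# Sensitivity check: does it matter whether the node is assigned 0.1% or 1%?
low, high = model_output(0.001), model_output(0.01)
print(f"E[output] at 0.1%: {low:.2f}")
print(f"E[output] at 1%:   {high:.2f}")
print(f"ratio: {high / low:.1f}x")  # a large ratio means the model is sensitive to the node
```

If the ratio is large, the node matters, and the community faces exactly the dilemma described above.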

I see six ways in which this is problematic:

  1. Exploitation of the community by bad actors: The truth-seeking community may be socially adroit, and people will actually grant them some sort of fool’s licence because they trust their intentions. But that may turn out to be exploitable: People with bad intentions may use the guise of being truth-seeking to garner attention and support while subtly manipulating their congregation toward their uncooperative values. (Others may only be interested in the attention.) Hence such a selective fool’s licence may erode societal defenses against extreme, uncooperative values and the polarization and fragmentation of society that they entail. Meanwhile the previously truth-seeking community may be overtaken by such people, who’ll be particularly drawn to its influential positions while being unintimidated by the responsibility that comes with these positions.
  2. Exploitation of the results of the research by bad actors: The same can be exploitable in that the truth-seeking community may find that some value-neutral belief is likely to be true. Regardless of how value-neutral the belief is, the disreputable group may well be able to cunningly reframe it to exploit and weaponize it for their purposes.
  3. Isolation of and attacks on the community: Conversely, the truth-seeking community may also not be sufficiently socially adroit and still conduct their research. Other powerful actors – potential cooperation partners – will consider the above two risks or will not trust the intentions of the truth-seeking community in the first place, and so will withhold their support from the community or even attack it. This may also make it hard to attract new contributors to the community.
  4. Internal fragmentation through different opinions: The question whether the sensitivity of the model to the controversial belief is high enough to warrant any attention may be a narrow one, one that is not stated and analyzed very explicitly, or one that is analyzed explicitly but through models that make contradictory predictions. In such a case it seems very likely that people will arrive at very different predictions as to whether it’s worse to ignore the belief or to risk the previous failure modes. This can lead to fragmentation, which often leads to the demise of a community.
  5. Internal fragmentation through lack of trust: The same internal fragmentation can also be the result of decreasing trust within the community because the community is being exploited or may be exploited by bad actors along the lines of failure mode 1.
  6. Collapse of the community due to stalled recruiting: This applies when the controversial belief is treated as a serious infohazard. It’s very hard to recruit people for research without being able to tell them what research you would like them to do. This can make recruiting very or even prohibitively expensive. Meanwhile there is usually some outflow of people from any community, so if the recruitment is too slow or fully stalled, the community may eventually vanish. This would be a huge waste especially if the bulk of the research is perfectly uncontroversial.

I have only very tentative ideas of how these risks can be alleviated:

  1. The community will need to conduct an appraisal, as comprehensive and unbiased as possible, of all the expected costs/harms that come with engaging with controversial beliefs.
  2. It will need to conduct an appraisal of the sensitivity of its models to the controversial beliefs and what costs/harms can be averted, say, through more precise prioritization, if the truth value of the beliefs is better known.
  3. Usually, I think, any specific controversial belief will be close to irrelevant to a model, so it can be safely ignored. But when this is not the case, further safeguards can be installed:
  4. Engagement with the belief can be treated as an infohazard, so those who research it don’t do so publicly, and new people are onboarded to the research only after they’ve won the trust of the existing researchers.
  5. External communication may take the structure of a hierarchy of tests, at least in particularly hazardous cases. The researchers need to gauge the trustworthiness of a new recruit with questions that, if they backfire, afford plausible deniability and can’t do much harm. Then they only gradually increase the concreteness of the questions if they learn that the recruit is well-intentioned and sufficiently open-minded. But this can be uncooperative if some of these coded questions become known and people who are unaware of their function then use them inadvertently.
  6. If the risks are mild, there may be some external communication. In it, frequent explicit acknowledgements of the risks and reassurances of the intentions of the researchers can be used to cushion the message. But these signals are cheap, so they don’t help if the risks are grave or others are already exploiting these cheap signals.
  7. Internal communication needs to frequently reinforce the intentions of the participants, especially if there are some among them who haven’t known the others for a long time, to dispel worries that some of them may harbor intentions other than prosocial, truth-seeking ones.
  8. Agreed-upon procedures such as voting may avert some risk of internal fragmentation.

An example that comes to mind: A friend of mine once complained about the lack of internal organization of certain apolitical (or maybe left-wing) groups and contrasted it with a political party that was very well organized internally – a right-wing party that is highly disreputable in our circles. His statement was purely about the quality of the internal organization of the party, but I only knew that because I knew him. Strangers at that meetup might’ve increased their credence that he agrees with the policies of that party. Cushioning such a mildly hazardous statement would’ve gone a long way toward reducing that risk and keeping the discussion focused on value-neutral organizational practices.

Another disreputable opinion is that of Dean Radin, who seems to be fairly confident that there is extrasensory perception, in particular (I think) presentiment on a timescale of 3–5 s. He is part of a community that, from my cursory engagement with it, seems not only to assign a nonzero probability to these effects and study them for expected-value reasons but to actually be substantially certain of them. This entails an air of disreputability, either because of the belief by itself or because of the particular confidence in it. If someone were to create a model to predict how likely it is that we’re in a simulation, specifically in a stored world history, they may wonder whether cross-temporal fuzziness like this presentiment may be a sign of motion compensation, a technique used in video compression, which may also serve to lossily compress world histories. This sounds wild because we’re dealing with unlikely possibilities, but the simulation hypothesis, if true, may have vast effects on the distribution of impacts from interventions over the long term. These effects may plausibly even magnify small probabilities to a point where they become relevant. Most likely, though, the reported effects stem from whatever diverse causes are behind the experimenter effect.1

I imagine that history can also be a guide here as these problems are not new. I don’t know much about religion or history, so I may be mangling the facts, but Wikipedia tells me that the First Council of Nicaea in 325 CE addressed the question of whether God created Jesus from nothing (Arianism) or whether Jesus was “begotten of God,” so that there was no time when there was no Jesus because he was part of God. It culminated as follows:

The Emperor carried out his earlier statement: everybody who refused to endorse the Creed would be exiled. Arius, Theonas, and Secundus refused to adhere to the creed, and were thus exiled to Illyria, in addition to being excommunicated. The works of Arius were ordered to be confiscated and consigned to the flames, while his supporters were considered as “enemies of Christianity.” Nevertheless, the controversy continued in various parts of the empire.

This also seems like a time when, at least in most parts of the empire, a truth-seeking Bible scholar would’ve been well advised to consider whether the question had implications vast enough to be worth the reputational damage and threat of exile that came with engaging with it open-mindedly. But maybe there were monasteries where everyone shared a sufficiently strong bond of trust in one another’s intentions that some people had the leeway to engage with such questions.


  1. Disclaimer: I don’t generally endorse the works of the author. Alexander originated a wealth of helpful ideas, so that I can’t help but cite him lest it seem that I plagiarize them. Unfortunately, (1) the community around his blog contains some insalubrious factions, and (2) until roughly 2016, he himself still published articles that presented issues in a skewed fashion reminiscent of the very dynamics he warns of in Toxoplasma of Rage. I’m adding these disclaimers to avoid the impression that I accept such intellectual wantonness or that it is accepted in my circles. I don’t know whether he still endorses his old approaches. 

  2. How this should be operationalized is still fairly unclear. Ozzie plans to work out more precisely what it is we seek to accomplish. You might as well call it “collaborative truth-seeking,” “improving collective epistemics,” “collaborating on better predictions,” etc. 

  3. I’ve considered a wide range of contributions I could make. Given my particular background, this currently seems to me like my top option. 

