Anthropic Capture, Intelligence, and Trees
I present a rather speculative argument whose most likely implication is that if we’re in a simulation, then the root is occupied by a superintelligence, and probably not a value-aligned one. If you’re new to the topic, this is probably not a good introduction; I mostly wrote it for myself, so as not to forget it all. I recommend Nick Bostrom’s Superintelligence instead.