This post is about a certain type of normative realism, and a related type of despair (I don’t think “despair” is quite the right word, but I haven’t found a better one). My aim is to question an assumption underlying this realism and this despair, using a toy robot as an analogy to illustrate the point.
I. More than the natural world?
The type of normative realism I have in mind includes commitments in the following vicinity:
Cognitivism: Normative judgments are candidates for truth or falsehood.
Judgment non-naturalism: In order for these judgments to be true, there need to be normative facts that are, in some sense, irreducibly “over and above” facts about the natural world.
Metaphysical non-naturalism: There are in fact such non-natural facts.
Characterizing the “natural facts” and the “over-and-aboveness” in question here is difficult. Very broadly and loosely, “natural” here refers to the type of thing that one expects to feature in our best scientific (and generally, non-normative) picture of the world, at all levels of abstraction. The connotation is meant to be one of empiricism, concreteness, scientific respectability, “is” instead of “ought,” “facts” instead of “values,” Richard Dawkins instead of C.S. Lewis, etc — though, perhaps instructively, none of these distinctions are particularly straightforward.
Realists of the type I have in mind think that this familiar (if somewhat hazily defined) naturalist world-picture isn’t enough to get normativity off the ground. Normative facts, they argue, are not reducible to, explicable in terms of, or constituted by any of the non-normative facts that hard-nosed scientist-types would readily accept. They are their own thing — something naturalists are in a deep sense missing.
It’s natural to think that facts of this kind require that there be a normative aspect of reality — a normative “territory,” to which our “map” of normativity aims to correspond. And this has seemed to many an objectionable inflation of our basic ontology — a strange and overly ambitious departure from a much more reliable scientific worldview — and to bring with it various further problems besides (for example, problems to do with explaining why we should expect our intuitive “map” of this territory, even after extensive reflection and revision, to be remotely accurate).
Partly in order to avoid objections about metaphysical inflationism, various writers attempt to carve out variants of non-naturalist realism that are in some sense metaphysically “light” — e.g., forms on which normative facts are not reducible to natural facts, but where they also do not imply any metaphysical commitments incompatible with a broadly naturalist worldview. Parfit (2011), for example, argues that:
“There are some claims that are irreducibly normative in the reason-involving sense, and are in the strongest sense true. But these truths have no ontological implications. For such claims to be true, these reason-involving properties need not exist either as natural properties in the spatio-temporal world, or in some non-spatio-temporal part of reality” (p. 486).
Enoch (2011) also cites Nagel, Dworkin, and Scanlon as attempting similar moves — and this fits with my memory of their views. Skorupski (2010, Chapter 17) endorses something similar, though he characterizes his normative facts as “irreal” (the fact that different writers characterize a similar position as “realist” and “irrealist” seems to me somewhat telling with respect to its mysteriousness).
I’ve always found this type of move somewhat obscure, and my suspicion is that the relevant needle cannot be stably threaded — that one should end, ultimately, with naturalism, or something more than naturalism, but not with, as it were, both: naturalism about ontology, but non-naturalism about… something else. But I’ve never tried to run the issue to ground.
Some non-naturalist realists, though, are more forthright in their inflationism. Thus, Enoch (2011) writes:
“My Robust Realism wears its ontological commitment on its sleeve. I believe that if we are to take morality seriously, we must go for such an ontologically committed view, precisely as understood by some of the traditional objections to such a view. The thing for us realists to do, I believe, is not to disavow ontological commitment and pretend that this solves (or dissolves) problems for our realism. Rather, we must step up to the plate, and defend the rather heavy commitments of our realism” (p. 7).
Put another way, Enoch’s view openly violates what he calls an “and-that’s-it” clause about science (as well as, presumably, other non-normative forms of inquiry) (p. 135; he cites Shafer-Landau (2003)). Our pro tanto reasons to prevent pain are a robustly additional feature of the world; alien scientists whose theories didn’t acknowledge these reasons would be missing some fundamental aspect of objective reality, as real as the speed of light. In this sense, even as Enoch distinguishes normative debate from naturalistic inquiry, he puts the two on a kind of par with respect to getting at the nature of the objective world — a parity that I think many normative realists aspire to validate.
(Note that instead of facts about reasons, we can also talk, here, about axiological facts — e.g., facts about goodness and badness — and deontic facts — e.g., facts about rightness and wrongness. The same arguments will hold; but recent analytic philosophy tends to focus especially on reasons. And note as well that the reasons in question need not be moral, or even practical — similar arguments apply to epistemic reasons, like your reasons to believe that there is or isn’t milk in your fridge. Though empirically — and, I think, instructively — many seem less natively inclined to think that epistemic reasons require inflationary metaphysics.)
I feel like I have a stronger grip on Enoch’s view than Parfit’s, so it’s primarily towards Enochian views that this post is directed. Whether a deep and relevant distinction between Parfitian and Enochian views can be sustained is a further question. I’ll report, though, that many Parfitians I know seem to me Enochian both in spirit and practice. The differences come down primarily to how they answer certain recherché questions about truth and what it means to “exist” — questions about which the field’s clarity does not appear to me crystal.
What’s more, in my experience, something like a Parfitian meta-ethic is often taken for granted by many philosophers working primarily in normative ethics, in ways that filter into adjacent discourses (for example, the discourse in the Effective Altruism community, especially in the UK). So while non-naturalist normative realism of some kind might seem strange and implausible to many outside the philosophy community, it’s actually pretty common in analytic ethics.
(I sometimes wonder whether there are effects here similar to ones I’ve heard suggested as explanations for the fact that philosophers of religion tend to be disproportionately theistic; namely, that one is more inclined to devote one’s career to studying something one thinks more substantively real; and more likely to become invested in this reality if it becomes importantly tied to one’s career. If true in ethics, this complicates the sense in which the meta-ethical views of the experts should be deferred to. But I don’t actually know the statistics, in religion or ethics; perhaps the 2020 PhilPapers survey — results released soon? — can shed some light. My girlfriend also notes that complications in this vein apply to the views of experts in other fields as well — for example, views about AI timelines).
II. Kinship to nihilism
My current best guess is that Enochian (and also Parfitian) realism is false — though the debate feels slippery. In particular, of the three claims listed above, metaphysical non-naturalism seems to me the least likely to be true: I don’t think there are irreducibly non-natural facts of the relevant kind. But it’s actually judgment non-naturalism that I want to question here.
Many people, I think, are Enochians/Parfitians because they accept something like judgment non-naturalism — that is, they think non-naturalism is the only way to validate what needs validating about normative judgment and discourse. And it seems to me that this leads not just to the embrace of questionable metaphysical conclusions, but also, to the degree those conclusions remain questionable, to a certain kind of backdrop nihilism, and consequent alienation from normative inquiry — a kind of “playing pretend,” in which one participates in a discourse one suspects is ultimately baseless (my sense is that this kind of “playing pretend” is actually reasonably common in philosophy more broadly; but I’m especially familiar with its presence in analytic ethics). It’s this nihilism that I’m especially concerned to push back on here.
The nihilism I have in mind here accepts cognitivism and judgment non-naturalism; but (quite reasonably, in my view) it denies metaphysical non-naturalism. That is, it accepts that our normative judgments can’t be true unless Enoch and/or Parfit are right about metaphysics; but they’re wrong. In this sense, asking whether there is more reason to perform action A or B is like asking whether there is more élan vital in this dog or that dog; or whether this fire or that fire emits more phlogiston. The world just isn’t the way these questions assume (this sort of position is also sometimes called “error theory”).
In my experience, non-naturalist realists tend to be fairly sympathetic towards nihilism of this broad flavor — and understandably so, given how many of their commitments it shares. Indeed, Enoch suggests that error-theorists are “the robust realist’s kindred spirits” (p. 81), and he writes that error theory is his second-best view (p. 81, footnote 75). (This is analogous, I think, to the sense in which dualists and illusionists about consciousness are kindred spirits; they agree about what we are committed to, but disagree about whether it exists).
Indeed, non-naturalist realists will sometimes use the threat of nihilism as a kind of positive argument for metaphysical non-naturalism. E.g., they’ll try to establish something like cognitivism and judgment non-naturalism, and then note that absent metaphysical non-naturalism, these add up to a type of nihilism, and a consequent denial of basic normative claims. Here I see analogies to arguments offered by theists, to the effect that the only way to ground real morality, meaning, beauty, etc, is in a theistic God. Both arguments amount to something like: it’s my non-naturalist metaphysics, or the void.
I’ve often wondered how many metaphysical positions are motivated, subtly or not-so-subtly, by efforts to avoid some void or other; to protect our connection to something we love, some perception of the world that calls out to us in beauty and meaning and importance, from some “it’s just X” associated with naturalism — just atoms, just evolution, just neurons, just computation — where the “just” in question is experienced as somehow draining the world of color, a force for numbness, disorientation, blankness, estrangement. I think some of my own interest in Christianity in college came from this; and so, I think, did some of my later, more secularly-respectable interest in normative realism. The culture of analytic ethics is not particularly friendly to expressions of emotions like fear of the void, and many philosophers do not appear disposed to them; but I wonder where they might be in play, maybe in subtle ways, all the same.
(My girlfriend notes that accepting claims implied by the void can also come with a threat of social punishment — perhaps a more potent and psychologically effective threat than the void itself).
How to understand why naturalism is so readily experienced as draining the world of meaning, beauty, etc is, I think, an important question; I suspect that our implicit metaphysical commitments are far from the only factors in play. My main aim here, though, is to point to a context in which I find that nihilism of the relevant kind — along with related behaviors — does not arise, at least for me (your mileage may vary), even given acceptance of naturalism. To illustrate this point, though, I want to first suggest a certain kind of toy model of judgment non-naturalism.
III. Realism bot
Like others (see e.g. Drescher (2006), chapter 2.4), I sometimes find it useful, in considering philosophical positions — and especially ones that make claims about the structure of human judgment and psychology — to try to design in my head a little toy robot that illustrates the view in question. Here’s my attempt with respect to judgment non-naturalism (for simplicity, I’ve made this robot look a lot like a naive act-utilitarian, but the case could be easily modified).
The robot’s value system is built to assume the existence of a certain envelope buried somewhere in the center of Mount Everest, sealed inside a platinum chest. Inside this envelope, the value system assumes, there is written an algorithm for generating a cardinal ranking of different worlds the robot’s actions could create. The robot’s objective is to maximize the value of the world according to this ranking (this robot is inspired by a similar case suggested by Nick Bostrom in the section on “value learning” in Superintelligence). It searches over possible actions, predicts their consequences, evaluates those consequences according to its best guess about what the envelope says (taking into account its uncertainty about the envelope’s contents), and then chooses on this basis.
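The decision loop just described can be sketched as expectation-maximization over hypotheses about what the envelope says. This is a toy illustration only — all function names, hypotheses, and numbers below are my own assumptions, not anything from Bostrom:

```python
# Toy sketch of the envelope bot's decision loop: predict each action's
# outcome, score it under every hypothesis about the envelope's ranking
# algorithm, weight by credence, and pick the highest expectation.
# All names and numbers here are illustrative assumptions.

def choose_action(actions, predict_outcome, ranking_hypotheses):
    """ranking_hypotheses: list of (credence, value_fn) pairs, where
    value_fn scores a predicted world according to one guess about
    what the envelope's algorithm says."""
    def expected_envelope_value(action):
        world = predict_outcome(action)
        return sum(credence * value_fn(world)
                   for credence, value_fn in ranking_hypotheses)
    return max(actions, key=expected_envelope_value)

# Minimal usage: the bot is 70% sure the envelope values joy,
# 30% sure it disvalues joy.
hypotheses = [(0.7, lambda w: w["joy"]),
              (0.3, lambda w: -w["joy"])]
predict = lambda action: {"joy": 10 if action == "free the deer" else 0}
best = choose_action(["free the deer", "walk away"], predict, hypotheses)
```

The point of the sketch is just that the robot's intuitions and cares enter only as evidence shaping the credences over hypotheses — the optimization target itself is always the envelope's (unknown) ranking.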
Note that we can also imagine the robot as possessing some analog of intuitions, desires, cares, etc, which it treats as evidence about what the envelope says. And we can imagine it learning that these intuitions, desires, and cares arise from a process that bears no causal or explanatory relation to the contents of the envelope — a fact that prompts intense uncertainty in the robot about what the envelope says, since it no longer has any reason to think that its intuitions, desires, cares, etc track the envelope’s instructions at all, and absent access to the envelope, it has nothing else to work with: the ranking could, as it were, be anything. This would be the analog of the central epistemic problem for views like Enoch’s and Parfit’s, pointed at by philosophers like Sharon Street. This problem currently seems to me very dire, but I’ll save that discussion for a different post.
For now, let’s imagine that the robot learns something else: namely, that it looks increasingly likely that the envelope doesn’t exist. The robot has been studying Everest, geology, the history of earth, Nepal/China, etc, and it just doesn’t seem like there’s ever been an envelope inside a platinum chest buried inside the mountain. None of the science or history points in this direction; the robot’s mining expeditions have been coming up empty-handed; and the robot has a sneaking suspicion that even if it deconstructed Everest rock by rock, its efforts would be in vain: there is no envelope to be found.
What would such a robot do? Various facts about its cognitive set-up matter here, but on the simple set-up I’m imagining, one thing it wouldn’t do is say something like “ah well, I guess my value system is based on a false premise; I’ll proceed, instead, to act on my desires, cares, intuitions, etc.” That is, it wouldn’t default to some kind of subjectivism or anti-realism as a “second best,” having learned that its first-best is out of reach. It might do this if it had some sort of complex meta-procedure for deciding what to do, or how to modify itself, if its basic ontology is undermined (see, e.g., rescuing the utility function for a proposal about such a procedure in the human case). But on the set-up I’m imagining, it has no such meta-procedure. In considering what to do if the envelope doesn’t exist, it only asks: what would the envelope say to do, if the envelope doesn’t exist?
Perhaps this leads to a kind of error message, and consequent problems for the robot’s cognition as a whole (see e.g. MacAskill’s “The Infectiousness of Nihilism” for some discussion of the problems nihilism poses for decision-making under uncertainty). We can also imagine, though, cases in which the robot starts to just ignore worlds in which the envelope doesn’t exist, and to focus its energies on the increasingly small set of cases in which, somehow, the envelope is actually there, inside the mountain, despite all evidence to the contrary.
This need not lead to distortions of the robot’s beliefs — at least not immediately. We can imagine the robot saying: “I recognize that the envelope is very unlikely to be there; but in that case, no actions are ranked higher or lower according to the envelope. Hence, those worlds are irrelevant to my current decision-making, so I premise my actions (though not my beliefs) on the envelope’s existence.” Perhaps, though, the entanglements between action and belief — entanglements to do, for example, with degrees of freedom involved in patterns of attention, argument, holistic response to evidence, implicit burdens of proof, etc — will lead to distortion after all. And the robot’s actions, regardless, will look increasingly disconnected from the mainstream picture of the world. “Still obsessed with that envelope, huh?” its friends might say, as the robot seeks funding for a project testing whether there could be platinum chests hidden in curled up dimensions. “Always and forever,” the robot replies.
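The robot’s rationale in this passage can be made precise with a small expected-value calculation — a sketch under my own simplifying assumption that no-envelope worlds assign every action the same value (zero):

```python
# If worlds where the envelope doesn't exist score every action the same
# (zero, say), they cancel out of any comparison between actions: the
# bot's ranking of actions is identical whether the envelope's existence
# is 99% likely or 1% likely. Numbers are illustrative.

def expected_value(value_if_envelope_exists, p_envelope):
    # No-envelope worlds contribute the same constant (0) for every
    # action, so only the envelope-exists term can break ties.
    return p_envelope * value_if_envelope_exists + (1 - p_envelope) * 0.0

# The comparison between two candidate actions is unchanged as the
# envelope becomes arbitrarily improbable (so long as p > 0):
for p in (0.99, 0.5, 0.01):
    assert expected_value(10, p) > expected_value(3, p)
```

This is why "premising action on the envelope’s existence" costs the robot nothing in its own terms: conditioning on the envelope’s existence never changes which action comes out on top.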
I’ve met a number of philosophers who take something like this “ignore the nihilism worlds” approach to the possibility of nihilism about normativity. Some even admit to assigning nihilism very high credence; but they suggest they can ignore it for practical purposes. But because of an implicit acceptance, I think, of something like judgment non-naturalism, they sometimes treat the main alternative to nihilism as something like Parfitian or Enochian realism. That is, they treat their own psychology as analogous to the robot’s. The need for an envelope is baked in; it’s the envelope or bust.
I think of this as the contemporary, Bayesian version of old fashioned nihilist despair. Absent an explicitly Bayesian epistemology, one might simply form the belief that there is no envelope, and be left with a perennially recurring error message — an endless loop of asking and undermining the question of what the envelope says to do. In the old days, this led to angst, ennui, self-deception, irony, the absurd. But contemporary Bayesians need not succumb, at least publicly, to such dramas. Rather, they can proceed more cheerfully, with the apparent rigors of an adapted expected utility theory as comforts in the face of what they suspect is a quite hopeless condition: “Sure, probably nothing matters. But in that case, nothing I do or feel (including about the fact that nothing matters) matters. I live for the other worlds, however improbable.”
In this sense, the normal connotations of existential despair — angst, depression, practical disorientation — need not apply; nor is any self-deception necessary. Whether these things occur in practice, in real non-robot humans, in subtle or not-so-subtle ways, is a further question.
Note that the point here is not to laugh at the envelope bot — to ridicule it for not giving up on the envelope and just, you know, having fun, making the best of things, not thinking too much about meta-ethical foundations, etc. This is not an “envelope-unless-that’s-somehow-a-problem” bot. It’s an envelope bot, through and through. Its ongoing devotion to the envelope makes total sense.
IV. Am I a realism bot?
As the robot example suggests, I think this is a coherent way for a psychology to be: there are possible beings of which something like judgment non-naturalism is true. Perhaps Enoch, Parfit, and the other philosophers I have in mind are among them. I’m skeptical, though, that all humans are like this. For example, myself.
Here’s the type of case that makes me skeptical (I don’t expect that this will be particularly persuasive to Enochians/Parfitians, but hopefully it will illustrate where I’m coming from). I imagine coming to the end of inquiry, and learning, conclusively, that naturalism is true, and that Enoch and Parfit are wrong about metaphysics. There are no non-natural normative facts or properties. There is only this actual, concrete universe; these planets and stars, this matter and energy, this ocean, these trees, this past, these fellow humans, these animals.
When I imagine this, I don’t imagine myself reacting in the way I expect a creature like the envelope bot above to react. I don’t feel myself going: “OK, but maybe actually it turns out that non-naturalism is true after all, so let’s focus on that possibility; Bayesians are never 100% on anything, right?” Nor do I expect to feel totally at a loss in choosing how to act, or to be getting a kind of “error message” in weighing different possibilities.
I imagine, for example, that a friend and I come across a deer with its leg caught in some barbed wire, writhing in pain, struggling to get free. We know that there is no non-natural reality or set of facts to offer us any guidance about what to do. There is just this deer, this barbed wire, us, this choice. But this deer wants to be free; we can see it in its eyes, crazed with fear and pain. And we, too, want this deer to be free. Let’s free the deer. We free it. It bounds away, into the forest. We’ve hurt our hands on the wire. We go home and put bandages on them.
Did we get any “points” in the universe’s eyes? Do the non-natural normative facts approve? Did we move the world up a notch on the One True Ranking beyond the world? No. But this doesn’t feel very relevant. We freed a deer. It might’ve died. Now it’s out there in the forest.
Or I imagine that my friends and I are alone in the universe on a desolate planet; and after learning definitively of the truth of naturalist metaphysics, we have a chance to create a world of joy, love, creativity, and understanding, and to create more people to join us in it; or, alternatively, to leave everything blank — just rocks and emptiness, forever. Are we at a loss for what to do? Must we assume some other metaphysics in order to start the discussion, on pain of error messages? I don’t think so. I imagine us asking: “naturalism or no: shall we create Utopia, friends?” Hell yes.
I think it’s helpful to try to consider these cases without feeling some subtle underlying pressure to give a socially acceptable answer. If it helps, you can imagine yourself definitively alone in the universe (modulo the deer) when encountering the deer, or in choosing between Utopia and emptiness. There is no one in the imagined world who will punish or judge you if you leave the deer stuck in the wire, or if you choose cold rocks over bright joy. You can choose either way — what will happen as a result is just: what will happen. The choice is entirely yours. You can think as long as you’d like.
For simplicity, I’ve been avoiding normative language in characterizing these cases, but this isn’t necessary for making the point. Indeed, it’s natural to say not just “Hell yes, let’s create Utopia” but “Hell yes, that sounds awesome”; to ask, not just “shall we free the deer?” but “should we?” Indeed, I imagine us reverting very quickly to using such language if for some reason we gave it up, regardless of our naturalism, and without self-deception or illusion.
That is, the point here isn’t about what would happen if we learned that nihilism is true, and all claims about shoulds, reasons, goodness, etc have no basis whatsoever. I actually think that analogous (though somewhat more complicated) arguments apply there as well, but that’s a somewhat different discussion. In this respect, I’m thinking in particular of Yudkowsky’s “What Would You Do Without Morality”; and of a post by (I think?) Scott Alexander, which I’m unfortunately not finding easy to locate, on a very similar theme (I think it involved an object that somehow made everything the person who possesses it does morally right? Alexander then asks, as I recall, if possessing such an object would cause you to go around stepping on kittens, killing, stealing, etc. If you, internet, know which post I’m talking about, I’d be curious). I think of the general point I’m making here as basically the same as the one I take Yudkowsky and Alexander (and Soares, in suggesting that you first drop your “shoulds,” then see what happens to your deliberation) to be making, though applied to non-naturalism in particular.
Leaving nihilism aside, then, the question here is whether learning the truth of naturalism in particular leads us to the patterns of behavior that the toy model of judgment non-naturalism above suggests. And at least in my own case, I think the answer is no.
Others may differ. It may be that in worlds in which they learn that naturalism is true, they find themselves torn between Utopia’s appearance of value, and their vivid knowledge that in fact, there is nothing in Utopia that they ultimately seek. Utopia, for them, was always the promise of something more than e.g. joy, love, creativity, understanding — it was the promise of those things, with an extra non-natural sauce on top. A Utopia with no sauce is an empty shell, the ethical analog of a phenomenal zombie. It looks right, but the crucial, invisible ingredient is missing.
How one reacts here is ultimately a question of psychology, not metaphysics. Some robots are envelope-ranking-maximizers, and they won’t change their goals just because the envelope probably doesn’t exist. But I think we should be wary of assuming too quickly that we’re like this.
Here, again, I’m thinking in part of an analogy with theism. To the person convinced that without God, there would be no basis for helping each other, not killing each other, etc, it’s natural to ask, à la Yudkowsky: what would you really do, if you actually learned that God doesn’t exist? And to remind them that even without God, if you’re hoping for a world where we help each other, don’t murder, etc — you can try to create that world anyway. My sense is that judgment non-naturalism warrants a similar response. That is, to someone convinced that something other than the natural world needs to exist in order for our deliberation and decisions re: helping the deer, creating Utopia, etc to make any sense, it’s natural to ask what it would really be like for them to approach those decisions given vivid knowledge that the natural world is all there is. Would their deliberation in fact collapse into nonsense?
If anything, it feels the opposite way to me: it’s clearer to me what I’m doing, in deciding whether to help the deer, create Utopia, etc, given the posited metaphysical clarity. I feel somehow more grounded, more like a creature in the real, raw, beautiful world, with my eyes open, taking full responsibility for my actions, and less like I’m somehow playing pretend, seeking guidance from some further realm or set of facts that I secretly suspect does not exist.
This is very far from a conclusive argument against judgment non-naturalism. The envelope-bot is only one toy model; how exactly it responds to knowledge that the envelope doesn’t exist isn’t actually specified by my description of its decision-procedure; and humans could differ from this bot in myriad ways.
What’s more, how you respond to explicit acceptance of a metaphysical position is at best only a fallible indicator of the metaphysical view that your underlying and often unconscious cognitive processes are committed to (to the extent that it makes sense to ascribe metaphysical commitments to these processes at all — I’m not actually sure). And I’m sufficiently unclear about what Parfit and Enoch’s metaphysical views ultimately amount to that I may not even be able to accurately or adequately imagine them being true or false (indeed, a general problem I have with non-naturalist realism, especially Parfit’s version, is that I don’t actually have a clear picture of the model of the world being argued for — what I should “add” or “take-away” from my representation of the world in imagining the difference between a naturalist and a non-naturalist metaphysic).
What’s more, I do think metaphysical and epistemic questions about positions like Parfit’s and Enoch’s can get tricky. I don’t have clear metaphysical and epistemic views about e.g. mathematics, modal discourse, etc — the type of “partners in guilt” that non-naturalists about normativity will sometimes appeal to. And more broadly, I suspect that our current philosophical understanding of these issues is sufficiently rudimentary that future philosophers will have much to teach all parties to the present debate, even if some were closer to the truth than others. I expect the naturalists will be closer, but I’m open to having misunderstood things in pretty fundamental ways regardless, and I tend to think that we won’t go too wrong leaving many meta-ethical questions — especially ones that don’t make concrete predictions about what e.g. different AI systems will do — to future philosophers to really settle.
Mostly, what I want to question here is a kind of “non-naturalist realism or bust” attitude that I’ve encountered in the ethics community in different contexts (I think Parfit’s work may have been important here), and to which I was at one point more sympathetic than I am now. I’m especially concerned with the extent to which I expect this position to lead, increasingly, to “bust” — even if the “bust” worlds in question get ignored, à la the robot above, and meaningful choice is thought relegated to an increasingly small slice of probability space.
I care about this partly because I think there are subtle and not-so-subtle harms to premising your life and action on views that in your gut and heart you don’t actually believe. One’s full capacities, I think, are not engaged; one becomes a kind of victim of one’s own abstractions, an organization telling itself a story that most of its employees find hollow and artificial. Something about life becomes thin and flimsy and brittle; one suspects that one can’t look at it too closely; one suspects that if one really thought it out, it would turn to ashes in one’s hands; best, then, not to think too hard.
I’m not saying that this is how non-naturalist realists like Parfit and Enoch actually relate to their position. But I noticed flavors of something like this in my own historical relationship to meta-ethics, and it has felt correspondingly powerful and important to re-focus on living and acting in what actually seems like the real world.
To me, what this currently looks like is a place where the choice about what sort of world to create is in a deep way on us. Just as there is no theistic God to tell us what to do, so there is no further realm of normative facts to tell us, either. We have to choose for ourselves. We have to be, as it were, adults. To stand together, and sometimes alone, amidst the beauty and horror and confusion of the world; to look each other and ourselves in the eye; to try to see as clearly as possible what’s really going on, what our actions really involve and imply and create, what it is we really do, when we do something; and then, to choose.
This isn’t to say we can’t mess up: we can — deeply, terribly, irreparably. But messing up will come down to some failure to understand our actions, ourselves, and each other, and to act on that understanding; some veil between us and the real world we inhabit; not some failure to track, in our decisions, the True Values hidden in some further realm. And when we don’t mess up – when we actually find and build and experience what it is we would seek if we saw with full clarity — what we will get isn’t the world, and therefore something else, the “goodness” or “value” of that world, according to the True Values beyond the world. It will be just the world, just what we and others chose to do, or to try to do, with this extraordinary and strange and fleeting chance, this glimpse of whatever it is that’s going on.
Nietzsche writes of an “open sea.”
“Finally the horizon seems clear again, even if not bright; finally our ships may set out again, set out to face any danger; every daring of the lover of knowledge is allowed again; the sea, our sea, lies open again; maybe there has never been such an ‘open sea.’”
Parfit quotes him approvingly. Their pictures of the openness in question seem different. Nietzsche was celebrating the death of the old god; Parfit, the future’s possibilities, in the context of a meta-ethics that I suspect would seem, to Nietzsche, a new god. But the image seems to me apt regardless.