In The Atheist’s Guide to Reality, Alex Rosenberg suggests that norms like the following constitute a universal human morality:
Don’t cause gratuitous pain to a newborn baby, especially your own.
Protect your children.
If someone does something nice to you, then, other things being equal, you should return the favor if you can.
Other things being equal, people should be treated the same way.
On the whole, people’s being better off is morally preferable to their being worse off.
Beyond a certain point, self-interest becomes selfishness.
If you earn something, you have a right to it.
It’s permissible to restrict complete strangers’ access to your personal possessions.
It’s okay to punish people who intentionally do wrong.
It’s wrong to punish the innocent.
Rosenberg does not, however, think that any of these are morally binding or (insofar as they are truth-apt) actually true. Nor, apparently, does he think they reduce to something deeper (such as utilitarianism). Rather, they are more-or-less separate behavioural norms that have become universal among human beings because conforming to them has tended to maximise reproductive fitness.
Although he is a metaethical nihilist, Rosenberg reassures us that we needn’t worry – most people are inclined to conform to these norms whether or not they regard them as true or objectively binding. Those individuals who fail to conform do so out of psychological peculiarity, or perhaps from having false beliefs about factual matters, rather than because of any recondite metaethical views such as moral error theory (or, presumably, relativism, non-cognitivism, or something else that might loosely be considered anti-realist).
You might ask whether Rosenberg is correct about this last point. My own suspicion, though it would take a lot of arguing to be confident, is that he’s probably right: metaethical nihilists are (I suspect) no more likely than the average person to steal the family silver.
But there’s a problem, at least a possible one, and I’ll get to it.
First, though, the norms that I’ve listed might have contributed to reproductive fitness in the environment of evolutionary adaptation. It’s possible that they did so, in part, by contributing to the viability of small bands of hunter-gatherers, and this might have been linked closely to the flourishing of individual members of the group. The individual’s success was the group’s success, and vice versa – and both contributed to the replicative “success” of the relevant strands of DNA. Maybe.
It doesn’t seem too much of a stretch that the health, longevity, and happiness of the individual, the individual’s (inclusive) reproductive fitness, and the viability of the hunter-gatherer band as a collective might all have tended to reinforce each other. Some behaviours could contribute to all of these simultaneously. As a result, we could have certain behavioural norms genetically hardwired into us. In principle, they might continue to contribute to individual flourishing (in some basic sense such as health, longevity, and happiness), reproductive fitness, and the viability of modern societies. How far they do so will depend, in part, on how far modern environments resemble evolutionary ones in relevant respects.
Perhaps a more plausible picture is that these norms are not actually hardwired (and we can always question what that even means, given that genes can be expressed in different ways in different environments). Nonetheless, under a wide variety of environmental circumstances, societies tend to converge on norms like these and to teach them to children, and perhaps children tend to be primed to learn them. That could be because norms much like these fit in well with whatever more minimalist universal human psychology exists (perhaps this includes certain kinds of responsiveness and sympathy that actually are pretty much hardwired). This might be more the sort of picture that Rosenberg is thinking of – I’m not sure of that.
Either way, let’s take it that psychologically usual people are likely to internalise norms of the listed sort, and are unlikely to be shaken from them by any kind of metaethical anti-realism about moral norms and judgments. Fine so far. There still seems to be a question as to whether, under current circumstances, a hodgepodge of norms like this is really adequate for whatever it is we might want from a system of moral norms. It might help us to get by, and even flourish, when interacting within small groups of people, but is it enough to help us meet our larger goals in a highly complex social world?
Consider an issue like climate change. Assume that we should, or actually do, care about the conditions under which future generations of human beings, and perhaps other creatures, will live on this strained planet. If so, we should seek to minimise the current process of anthropogenic global warming. Do the norms that Rosenberg lists, which he thinks (perhaps rightly) come easily to us, help us with that? A couple of the very vague norms that he lists might give some guidance, but presumably not a lot. The sorts of political decisions needed to address an issue such as climate change might not come “naturally” or easily to us at all.
Perhaps that’s not news. Perhaps any credible moral theory will predict that the sorts of political changes we need to accomplish various large goals will be counter-intuitive to most people. Still, the counter-intuitiveness can be accounted for on Rosenberg’s view of the world, which may be a point in favour of his view. Furthermore, metaethical nihilists may have no more difficulty than anyone else buying into whatever political initiatives are required to deal with an issue such as climate change. So none of this should count against Rosenberg’s metaethical nihilism.
All the same, I don’t think we can be confident that the morality that comes easily to us is good (i.e. effective) enough, these days, for what we probably want a moral system (viewed as social technology) to deliver. If that is right, Rosenberg appears too complacent. It may be that no moral system is true or objectively binding, but some moral norms might come to us easily and might still do the job that we (most of us) want on small scales. But they won’t necessarily scale up. That’s where I think we have a problem.