Reason vs Violence

Now here's the really (to me at least) juicy part: even if someone is a scientific realist, a moral pragmatist/anti-realist might well say the following: how I ought to act in a particular situation more or less follows from the goals I set, but those goals are, until otherwise demonstrated, at my discretion; the situation underdetermines my goals.
To be a moral realist at least traditionally requires that at least some goals are forced on me by reason alone, given the situation.

Ok, so your point is this: How one ought to act is partially determined by one's goals. In the same way that observations are used to falsify potentially true scientific explanations, goals are used to falsify potentially true moral explanations. Goals can be thought of as a special kind of evidence that is used to eliminate competing moral explanations. However, goals are always at one's discretion. Thus they are always a little bit subjective. Therefore, if morals are partially determined by goals, then those morals must also be a little bit subjective.

But I don't think this argument follows. First, even if goals are "at your discretion", why should this imply that they are 'subjective'? If one is setting goals, one is choosing between competing options. We know this because a rational individual will not just randomly choose a goal and act on it. He will consider multiple different options, and use argument to determine which is best. In other words, a rational individual will try to explain why he should choose one goal over countless others. Thus setting a goal involves choosing its associated explanation. But then this means that discovering goals is the same as discovering truth. Thus, they are no more subjective than philosophical explanations. Therefore, it follows that morals are not subjective either.

Second, I believe you have the relationship between morals and goals the wrong way around. In my opinion, goals are not a special kind of evidence. Rather, they are a special kind of moral. Goals are a subset of the set of all possible morals. But I won't bother trying to justify this just yet.
 
Could you please summarise for me? Every time I read a Stanford Philosophy entry, I need to write a whole essay before posting a response. I don't do this with Popper because I already understand him.
Alright, so I’ll try to expound on metaethical questions a little bit. The basic idea is: how do we ascribe moral values to actions? Can there really ever be such a thing as an objective morality? Metaethics is called “meta” literally because it asks not ethical questions, but questions about ethics itself.

If I understood your preliminary argument correctly, in order to arrive at your conclusion that “violence is always immoral” (or “violence is always bad”) you laid down something like:

1. Violence and reason are two mutually exclusive ways to solve conflicts
2. Conflicts find resolution in truth
3. Reason always aims at truth
4. Therefore reason is always good
5. Therefore (given 1) violence is always bad.

In fact, you have a complementary argument according to which conflicts find resolution in truth because only truth (via innovation, inter alia) can promote survival of the species. And I would agree with you here: if everybody behaved violently to resolve conflicts, the world would be doomed. Survival requires a world in which the majority of people behave reasonably. Given this, I think that your argument is fairly consistent. It does have to make room for the exception of self-defense, but if we accept that self-defense is a mere exception, then it remains consistent.

Now, to take an extreme example, a particularly mischievous metaethicist might ask this question: “How can you show that the promotion of the human species’ survival is objectively good?” This might seem self-evident to you, but is it really objectively evident? It could be argued that humans have done a lot of damage to the planet and to its species. A person could come forward and say: “Survival of the human species is only objectively good for humans; therefore it is really only subjective.” And probably all you’ll be able to do will be to defend your point, with vigor for sure, like you did in the case of suicide above (where you said that using violence to save somebody from killing himself was immoral, even if that person thanked the violent saviour with all his heart a month later, with which I do not agree.) In my definition of “objective”, you will have difficulty showing that human survival is objectively a good thing. This is what metaethics does: it goes back to the very basic assumption you take for granted, and it says “but why do you claim that this must be the case?”

The problem with morality is that, whatever stance you take, it will always be based on a founding axiom that is phrased as a synthetic statement. A synthetic statement is a statement that can’t rely merely on itself to establish validity. It isn’t true a priori, unlike an analytic statement (via the law of non-contradiction) such as “A tall man is tall”, or within modal logic, “A bachelor is single”. These statements can be formally shown to be objectively true a priori. But “violence is always immoral” is a synthetic statement – it requires empirical observation to establish validity. And we have seen that, in fact, empirical observation shows that it is not a valid statement, or only at the price of being extremely qualified. For the statement to be valid, we must exclude the whole realm of self-defense from the definition of violence, as well as (possibly) cases of saving somebody from committing suicide by knocking them out of consciousness. You said you did not agree that the illustration weakened the case of the statement “Violence is always immoral”; but ultimately, you only gave an opinion against it, not an objective argument that cannot be refuted. But this should not be surprising: there is no way of establishing the statement as an objective one, because it is synthetic, and because empirics show that it can sometimes be invalidated (i.e. at the very least in the case of self-defense).

In the Critique of Pure Reason, Kant began by asking: “Are synthetic a priori judgements possible?” and you can see where he was going by asking that. He was trying to show that synthetic judgements, such as the statements of morality, can actually have the power of analytical judgements, that is, be objectively true based on their very form, and constitute the basis of a genuine moral law. From this he could derive the categorical imperative (and under the categorical imperative, I believe the statement “Violence is always immoral” is true, full stop). Much of Kant’s reputation as the greatest modern philosopher comes from this attempt. So you can see that these questions about the valuation of morality are not to be taken lightly! ;)
 
wolly.green said:
Ok, so your point is this

As a note, this is an argument I think the moral realist must address, rather than one I myself think kills moral realism. It's one I'd like your thoughts on, not necessarily one I'm advancing as my personal belief.

(After all, I tend towards realism myself)

"at your discretion", why should this imply that they are 'subjective'. If one is setting goals, one is choosing between competing options. We know this because a rational individual will not just randomly choose a goal and act on it. He will choose multiple different options, and use argument to determine which is best.

Being completely random is one thing; being completely free of randomness is another. In other words, why is it clear there is a single "best" decision determined by the evidence up to that point? There may be irrational options that must be ruled out, and worse options that must also be ruled out, and in fact I'd tend to say that's often the case.

However, even if there are irrational options, without further explanation, I don't see how we can rule out that the situation underdetermines (at least sometimes) the goal.

In a word, this would translate to rationality in the context of decision-making basically amounting to "reasonable" choices -- that is, ones which don't commit some irrational blunder, but with multiple choices remaining that wouldn't have committed such a blunder. The word "reasonable" signifies possibility -- that is, possible options that wouldn't commit an error.

Said differently, we are rational if we abide by the principle of criticism/error-checking, NOT if we select the "correct" option. Indeed, it's not clear that the process of criticism/error-checking will always rule out all but one option. That's a further claim -- not one I can't entertain, but one I could only believe with further argument.

(In fact, if anything, I'd say this doesn't introduce subjectivity into the realm of truth, because I think part of truth is acknowledging multiple options if the arguments for each is reasonable given the situation. So I still don't believe in disagreements about truth, I just think when bridging the is/ought divide that we can't assume the singular truth will be that there's always one and only one correct course of action.)

I believe you have the relationship between morals and goals the wrong way. In my opinions, goals are not a special kind of evidence. Rather, they are a special kind of moral. Goals are a subset of the set of all possible morals.

I think you might have to be careful here since you're attempting to make a very fine-tuned statement about my view on this relationship, because I noticed you represented my position, e.g., as

However, goals are always at ones discretion.

when I said

charlatan said:
To be a moral realist at least traditionally requires that at least some goals are forced on me by reason alone, given the situation.

which obviously implies I don't think all goals are at one's discretion necessarily, unless I claimed moral realism is false, which of course I didn't since I would tend towards it. Just wanted to save you time in case you end up responding to a position I don't actually hold. In fact, this statement of mine suggests at least some goals are morals.

If you want my own view, it's that goals are a priori (until proof is given otherwise) more general than morals the way I define these, because morals actually are oughts which are effectively ways we have to act, whereas a priori, I'm not sure goals are always forced upon us by present knowledge.
They may be to some extent at our genuine discretion, meaning underdetermined by the situation at hand. So basically, morals the way I use the term are matters of right and wrong, whereas goals may or may not be determined by strict right and wrong.

One might try to argue: OK, but you can make goals a subset of morals by phrasing the goal as "I ought to set out to do one of the rational things I can do, even if there may be more than one."
This looks tempting, but ultimately, you do have to pick one among the rational options as what you're striving for. So it seems to me even this way, you don't escape the underdetermination/get goals strictly as a subset of the morals, unless you don't define morals as a matter of right/wrong and simply define your morals as your goals.

Now here, I submit that this is a mistake -- it's better to make a distinction between values and morals. Values (thus goals) can be either a matter of choice or objective.
Morals (if they exist) are objective, as they're by definition a matter of right/wrong.

In fact, even submitting that each person has his/her own system of morality wouldn't take away the objectivity, for if that person contradicted that system, he/she would be immoral. So it seems to me inescapable that the randomness that could affect picking a goal doesn't affect morals basically by definition, and changing that definition seems pointless.
 
Ren said:
And I would agree with you here: if everybody behaved violently to resolve conflicts, the world would be doomed.

As a comment on this, I think social-contract views of ethics seem to work 'pragmatically' for most humans, in that I do think even when acting in one's self-interest, the promotion of societal law-and-order tends to give a net benefit even to someone not inclined to kindness, at least on average, due to the benefits of cooperation.

However, the obvious weakness in this beyond pragmatic utility as a foundation for ethics is that it depends on a relative equality of power. If there were a super-intelligent godly powerful being/robot which doesn't depend on our cooperation for its wellbeing, it really makes no difference if it tortures us for the hell of it.
(As a further example, the poor treatment of animals doesn't always pose a grave threat to us, and many ethical theories would aim to go above and beyond in being humane to animals in a way that would invoke principles not following self-evidently from social-contract-based tenets.)
 
OK sorry for the add-on, but I really think I need to include it to be sure we're on the same page:

wolly.green said:
Goals can be thought of as a special kind of evidence that is used to eliminate competing moral explanations.

I'm not wholly sure why you want to say I'm thinking of goals as evidence (I mean, I get you want to make the analogy that it helps "rule options out" just as evidence helps "rule theories out" by falsifying, but the analogy seems to stop there), but I think even so, the latter part of your statement probably diverges from what I was actually going for.

In particular, moral explanations are about right/wrong (in other words, goals that are forced on us by our knowledge of the situation/that is, goals that we ought to pursue), so I don't think you need any "discretion"-based goals to decide on them. IF there's a right/wrong given a situation, it should be determined by reason alone -- in other words, moral aims are those which are not underdetermined, if they exist.

The other thing is that goals aren't quite analogous to evidence as I presented the pragmatist's argument, because if there's an element of choice in selecting our goals, it isn't quite like evidence that isn't selected by us.
 
I'm under the weather today, or I would expound upon my own thoughts over said conundrum with a biblical perspective, to be honest.. Which may or may not be prudent to the "moralistic" masses. ;)

Perhaps it is good to be under the weather, once in a while. :)

Live to fight another day, eh?
 
I'm under the weather today, or I would expound upon my own thoughts over said conundrum with a biblical perspective, to be honest..
"They aren't Christians? Kill them all."

:D
 
"They aren't Christians? Kill them all."

:D
Oh boy... I knew you were coming to get me, Inspector. But you won't take me down that easily! I won't submit!! :tearsofjoy::tonguewink::joycat::dizzy:
 
Oh boy... I knew you were coming to get me, Inspector. But you won't take me down that easily! I won't submit!! :tearsofjoy::tonguewink::joycat::dizzy:
Bladibladibla, what I'm noticing is that you have not denied my assertion. So you do want to kill them all after all....
 
Bladibladibla, what I'm noticing is that you have not denied my assertion. So you do want to kill them all after all....
I haven't denied your insertion??

Why Inspector!!! #@!@#@!!:dizzy: I'm shocked at this sort of talk! I never...

:m109:
 
Values (thus goals) can be either a matter of choice or objective.
Morals (if they exist) are objective, as they're by definition a matter of right/wrong.

In fact, even submitting that each person has his/her own system of morality wouldn't take away the objectivity, for if that person contradicted that system, he/she would be immoral.
So you are ready to accept the idea that, if two people subscribing to distinct systems of morality were to take opposite sides in a given situation, they would still be behaving objectively from a moral point of view? It seems to me that the objectivity that you're envisaging here would inevitably end up being solipsistic. Tell me if I've completely misunderstood you, which is quite possible.

I'd be interested to know how you would justify the objectivity of values. As for morals, I agree that they must be taken to be objective, though I'm not sure how proof can be given of it. But if we just mean "objective with regard to a set of axioms that are themselves not self-evident", then I do agree, yes.
 
*Runs for cover* :hushed:
 
Ren said:
So you are ready to accept the idea that, if two people subscribing to distinct systems of morality were to take opposite sides in a given situation, they would still be behaving objectively from a moral point of view?

First meta-note: I OFTEN generate arguments someone could think of against me and respond to them, just because I notice people often don't notice when I'm making an argument I actually buy vs one that I think deserves to be taken seriously/responded to.

That said, no, I don't ultimately think this kind of relativist morality where different standards are there for everyone in fact works. However, my point was that even if each person has a separate system dictating how he/she ought to act himself/herself, we'd still not get to genuine subjectivity, as we could then simply say morality means each person follows his/her system, and it would still be an objective fact whether each person followed his/her system. Genuine subjectivity would only happen if each person had separate, mutually inconsistent systems for how everyone ought to act -- then obviously we could only evaluate the truth of a moral statement relative to a given individual.

This last version is the solipsistic one. The former one is a milder relativism. To be clear, I subscribe to neither myself.


(none of these is too central to my points to wolly, just I'm laying out every last thing that could come to mind for exhaustiveness. The crux of my point to him is that there's no reason there can't be multiple solutions to the goals one might pursue, at least without some argument I'm not aware of. It's no different from a differential equation having more than one solution. Having more than one solution doesn't lead to subjectivity of truth, as we simply accept them all as solutions -- that they're all solutions is an objective fact. This is no worse than accepting that quantum mechanics doesn't become subjective just because it's probabilistic empirically -- what the possible outcomes are+their probabilities is rigidly determined)
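The differential-equation analogy can be made concrete with a tiny numerical sketch (Python; my own illustration, not something from the thread). The ODE y' = y is satisfied by y(x) = C·e^x for every constant C: many distinct solutions, and yet whether any given candidate is a solution remains an objective fact.

```python
import math

def satisfies_ode(C, x=1.0, h=1e-6):
    """Check numerically that y(t) = C * e^t satisfies y' = y at the point x."""
    y = lambda t: C * math.exp(t)
    derivative = (y(x + h) - y(x - h)) / (2 * h)  # central-difference estimate of y'(x)
    return math.isclose(derivative, y(x), rel_tol=1e-6)

# Many different choices of C all count as solutions -- objectively so:
print(all(satisfies_ode(C) for C in (1.0, 2.0, -3.5)))  # prints True
```

Multiplicity of solutions doesn't make "being a solution" subjective; it just means the constraint underdetermines the answer, which is exactly the distinction being drawn above.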

So the goal is ultimately to ask whether there are any oughts forced on us -- to shift focus away from saying all goals are forced on us by past circumstances with no element of choice. While one could make the stronger claim, I don't think one even has to in order to ground moral realism. As long as there are some oughts, that would be enough.
 
First meta-note: I OFTEN generate arguments someone could think of against me and respond to them, just because I notice people often don't notice when I'm making an argument I actually buy vs one that I think deserves to be taken seriously/responded to.
And I like that, my friend! This is the true philosophical spirit. I love the idea of embracing an argument just from the desire for it to be tested and perhaps proven wrong by others.

(none of these is too central to my points to wolly, just I'm laying out every last thing that could come to mind for exhaustiveness.)
Sometimes the best discoveries take place on the margins ;)

I'm still not sure about your idea of a hypothetically objective morality of every individual person following their own system (would this not allow us to dispense with the notion of morality altogether?) but I'll have to mull this over a little bit first.
 
Ren said:
I'm still not sure about your idea of a hypothetically objective morality of every individual person following their own system (would this not allow us to dispense with the notion of morality altogether?) but I'll have to mull this over a little bit first.

Well here's the point: suppose we define right/wrong to be such that each person has a system of determining certain things he/she ought to do, and is right if the system is followed and wrong if it isn't.
Then whether he/she was immoral in a given circumstance is an objective question -- it's a fact whether the system he/she is supposed to follow was in fact followed.

The proper criticism of this is that it's arbitrary, not that it is solipsistic. That is, how on earth did each person even get assigned this random system he/she is supposed to follow, vs those others follow to determine their own actions?

Solipsistic would be where each person has mutually inconsistent systems for how EVERYONE ought to behave. Then it's simply impossible to define morality as right/wrong without qualifying "relative to X."

I mention these both because people exist who seem to adhere to either one (not because I think they work -- but I can bet there are philosophers who will defend views like these). There are some who say "how ***I*** ought to live is based on my culture, but I won't say that's how another should live -- in fact, the other should live according to his/her culture!!" This allows us to evaluate how any given person ought to live without saying "the truth of how he/she ought to live is relative to observer X/culture X".
 
@Ren just for what it's worth, the system you're skeptical about is again not one I really endorse as an objective morality, for reasons of arbitrariness. I submit that IF we can judge each person's immorality/not by a system he/she is individually supposed to follow, then we get objective answers. But there's no reason to accept such an "if," again because we can then ask "why" should the person follow this system -- how was it arrived at? Unless there are canonical reasons for doing so, we haven't gotten to any kind of moral realism. What we've done is found a scheme for evaluating morality, but without objective reasons for that scheme holding. Given we accept the scheme like a set of axioms, we arrive at an answer that is objective, but the reasons for accepting the scheme itself aren't objective, thus there's a deficiency.
 
I submit that IF we can judge each person's immorality/not by a system he/she is individually supposed to follow, then we get objective answers. But there's no reason to accept such an "if," again because we can then ask "why" should the person follow this system -- how was it arrived at? Unless there are canonical reasons for doing so, we haven't gotten to any kind of moral realism. What we've done is found a scheme for evaluating morality, but without objective reasons for that scheme holding. Given we accept the scheme like a set of axioms, we arrive at an answer that is objective, but the reasons for accepting the scheme itself aren't objective, thus there's a deficiency.
I absolutely, completely agree with you on this, @charlatan. It seems that moral objectivity can only be defined with reference to an axiomatic framework which is itself not objectively laid out. In fact, this realization, though it may seem obvious to you and me, can cause its fair share of problems in real life. The vast majority of people unwittingly subscribe to an axiomatic framework which they believe to be objectively arrived at. When you politely question their commitments (perhaps as part of a quest for a genuine moral realism) you get branded a relativist. Oh well... humans.

Can I ask you what you think about Kant's attempt at achieving an objective moral law, if you are familiar with it?
 
Ren said:
Can I ask you what you think about Kant's attempt at achieving an objective moral law, if you are familiar with it?

To be honest, I am not sure I ever understood the whole categorical imperative thing. I don't particularly like the whole consequentialist/deontological divide, as in a weird way, I almost think that it's a play on the is/ought divide (deontology tries to ground things mostly in "oughts" by making duty the primary marker, consequentialism tries to ground things mostly in "is" -- since it seems to care about states of affairs, i.e. outcomes, being moral or not).
But basically Kant is infamous for being deontological, and between the two I fear that more: between an unrealistically demanding theory (it seems like your job isn't clearly marked out/never done with consequentialism) and a somewhat arbitrary set of rules (which deontology often seems to feel like), I'd err toward the former.

moral objectivity can only be defined with reference to an axiomatic framework which is itself not objectively laid out.

I should note that I'm not myself committed to the idea that morals exist only relative to a framework, but I do think that's true of some values. I think it is coherent/reasonable to talk of a value existing relative to a subject (say sentimental value). However, I think by the very definition/concept of morality as RIGHT/WRONG, we're forced to ask whether there exists such a thing independent of preference. Either we say there are merely subjective values and no morals, or not.

However, I certainly would agree that IF we defined/could conceive of morals as evaluated relative to an axiomatic framework, we could get answers -- it's just not clear how we're talking in any meaningful sense about morals anymore, because it's unclear how e.g. ZFC set theory isn't a moral theory then. Ultimately it has to address right/wrong in some fashion.


The key to this whole thing is that one has to question the idea that, just because one has a fuzzy idea of something, it doesn't correspond to a single objective thing. Sometimes maybe there really is any number of random things it could really be. Sometimes there really is a single thing. We arrive at this by asking questions/critical examination.
A priori, science is an enterprise that starts with vague, fuzzy sensory stimuli -- which we surely don't all see the same (our brain apparatuses are roughly similar, but the stimuli are unlikely to be the same for each of us). That there exists an objective scientific reality behind that is simply the statement that, when we ask questions and attempt to describe the phenomena under question, we realize there seems to be something objectively there, and it even accounts for why we have different fuzzy senses about it.
The mathematical nature of physics seems to help this.

The same can happen with a fuzzy mathematical idea we start with -- you may seek a theory which solves a conjecture using a certain fuzzy principle you think is applicable to the conjecture. There may really only be one such theory... mathematicians actually make such claims, and there does seem to be a fact of the matter as to whether they are right. Basically, just because we start with a fuzzy idea does not mean that there are multiple valid realizations of that idea -- nor does it mean there is necessarily a single one. It depends, and the objective truth of which of those two is the case is found by critical examination.


I submit that, once we say "right/wrong", we have already placed a constraint. Unless we genuinely say that right/wrong has no meaning, in which case you could've said right/wrong = pig (i.e. really you might as well not even ASK the question of whether there's such a thing as morals), and decided looking for a theory of morality is looking for a theory of pigs. But I submit that if 2 people are agreeing to discuss right/wrong, they have certain things in mind that they probably realize overlap to some degree, but aren't sure to what degree they overlap. Just the same as two people having two things in mind: fuzzy sensory stimuli of certain balls rolling down inclined planes that are clearly different at first sight, but may correspond to something objective.

Or it could be more like a sentimental value.


(I tend, along with wolly, to be more on the side of truth-seeking than a pragmatist. The idea that things basically just are true within frameworks smells of pragmatism ultimately -- that it's ultimately a matter of choice, cultural convention, etc....the reason I tend to be on the side of truth-seeking is I ultimately find the idea of cultural posits kind of self-defeating, like ultimately even if we say they're posits, that still is a claim of truth/falsehood.....I don't ASSUME there is objective truth, but I do claim that whenever we make theories, we really are trying to seek truth, and that the cultural posits idea seems self-defeating, because EVEN the idea that we're ultimately just pragmatic agents trying to reach agreement/collective consensus seems to be an attempted description of reality.....I am what you'd call a truth-seeker/realist who is sympathetic to the problems pragmatism exposes, in that I'm sympathetic that OK, maybe we are NOT really reaching objective truth...but I find it incoherent to say we're not TRYING to seek some kind of truth, whether we can arrive at any such thing or not....also, even if there are multiple realizations of an idea that we can't whittle down to one, THAT FACT is itself objective, so it doesn't help to go to cultural posits as far as I can see.)
 