In his correspondence with Princess Elizabeth and Queen Christina, as well as in parts of the Passions of the Soul, Descartes provides the beginnings of a theory of ethics. Descartes argues that the supreme good, or the end that one ought to pursue in all of one’s actions, is virtue. The latter is understood by Descartes as a matter of using one’s absolutely free will as well as one can. In this paper we try to shed some light on what this Cartesian notion of virtue more specifically entails.
It is possible for persons to deserve evaluative attitudes such as admiration and disdain. There is an apparent asymmetry between positive and negative attitudes, however. While the latter appear to be subject to what I will call a "control requirement," the former do not appear to be so subject. I attempt to explain away this asymmetry by appeal to pragmatic factors.
Standard welfarist axiologies do not care who is given what share of the good. For example, giving Wlodek two apples and Ewa three is just as good as giving Wlodek three and Ewa two, or giving Wlodek five and Ewa zero. A common objection to such theories is that they are insensitive to matters of distributive justice. To meet this objection, one can adjust the axiology to take distributive concerns into account. One possibility is to turn to what I will call Meritarian axiologies. According to such theories, individuals can have a claim to, deserve, or merit, a certain level of wellbeing depending on their merit level, and the value of an outcome is determined not only by people’s wellbeing but also by their merit level.
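The abstract’s apple example can be made concrete in a small sketch. The welfarist part is exactly the example given; the Meritarian variant is purely my own illustrative assumption (the paper does not specify a formula), here penalising the gap between each person’s wellbeing and the level her merit entitles her to.

```python
def welfarist_value(wellbeing):
    # Standard welfarist axiology: only the total matters, not who gets what.
    return sum(wellbeing)

# Wlodek gets 2 apples and Ewa 3, vs. 3/2, vs. 5/0: all equally good.
assert welfarist_value([2, 3]) == welfarist_value([3, 2]) == welfarist_value([5, 0])

def meritarian_value(wellbeing, merit):
    # Toy Meritarian axiology (an assumption for illustration, not the
    # paper's proposal): subtract the distance between each person's
    # wellbeing and the wellbeing level her merit gives her a claim to.
    return sum(wellbeing) - sum(abs(w - m) for w, m in zip(wellbeing, merit))

# With equal merit levels, the even split now beats the skewed one.
assert meritarian_value([2, 3], [2.5, 2.5]) > meritarian_value([5, 0], [2.5, 2.5])
```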
Substantial metaphysical theory has long struggled with the question of negative facts, facts capable of making it true that Valerie isn’t vigorous. This paper argues that there is an elegant solution to these problems available to anyone who thinks that there are positive facts. Bradley’s regress and considerations of ontological parsimony show that an object’s having a property is an affair internal to the object and the property, just as numerical identity and distinctness are internal to the entities that are numerically identical or distinct. For the same reasons, an object’s lacking a property must be an affair internal to the object and the property. Negative facts will thus be part of any ontology of positive facts.
Wlodek Rabinowicz suggested to me in an e-mail conversation (2001) that one might be able to use a particular Hats Puzzle to make a Dutch Book against a group of individually rational persons. I present a fanciful story here that has the same structure as Rabinowicz’s Dutch Book.
On the face of it, aggregation and deliberation represent alternative ways of producing a consensus. I argue, however, that the adequacy of aggregation mechanisms should be evaluated with an eye to the effects, both possible and actual, of public deliberation. Such an evaluation is undertaken by sketching a Bayesian model of deliberation as learning from others.
The object of this paper is to explore the intersection of two issues – both of them of considerable interest in their own right. The first concerns the role that feasibility considerations play in constraining normative claims – claims, say, about what we (individually and collectively) ought to do and to be. This issue has particular relevance for the confrontation of moral philosophy with economics (and social science more generally). The second issue concerns whether normative claims are to be understood as applying only to actions in their own right or (also) non-derivatively to attitudes. Both these issues are ones on which different theorists have taken quite different stands, though we think there is more to be said about them. The point of juxtaposing them lies in the thought that actions and attitudes may be subject to different feasibility constraints – and hence that how we conceive of the role of feasibility in an account of normativity will depend in part on how we conceive of the role of actions and attitudes in normative theorising.
Expressions such as ‘morality requires’, ‘prudence requires’ and ‘rationality requires’ are ambiguous. ‘Morality’, ‘prudence’ and ‘rationality’ may refer either to properties of a person, or to sources of requirements. Consequently, ‘requires’ has a ‘property sense’ and a ‘source sense’. I offer a semantic system for its source sense. Then I consider the logical form of conditional requirements, in the source sense.
The so-called Substituted Judgment Standard is one of several competing principles on how certain health care decisions ought to be made for patients who are not themselves capable of making decisions of the relevant kind. It says that a surrogate decision-maker, acting on behalf of the patient, ought to make the decision the patient would have made, had the latter been competent. The most common way of justifying the Substituted Judgment Standard is to maintain that this standard protects patients’ right to autonomy, or self-determination, in the situation where they are no longer able to exercise this right on their own. In this paper we question this justification, by arguing that the most commonly suggested moral reasons for allowing and encouraging people to make their own choices seem not to apply when the patient’s decision-making is merely hypothetical. We end with some brief sketches of possible alternative ways of justifying the Substituted Judgment Standard.
Mill’s qualitative hedonism has been subject to much debate. It was formulated to strike a balance between classical hedonism and perfectionist conceptions of happiness, and many have thought that either it is an abandonment of hedonism or it collapses into a mere observation on what actually provides the greatest quantity of pleasure. Here it is suggested that a doctrine along the lines suggested by Mill might be defended if the role of our preferences is understood in terms of some ideas defended in a couple of papers by Wlodek Rabinowicz.
How do we determine the well-being of a person when her preferences are not stable across worlds? Suppose, for instance, that you are considering getting married, and that you know that if you get married, then you will prefer being unmarried, and that if you stay unmarried you will prefer being married. The problem here is to find a stable standard of well-being when the standard is set by preferences that are not stable. In this paper, I shall show that the problem is even worse: incoherence threatens if we accept both that preferences determine what is better for us and that desires determine what is good for us. After I have introduced a useful toy model and stated the incoherence argument, I will go on to discuss a couple of unsuccessful theories and see what we can learn from their mistakes. One of the most important lessons is that how you would have felt about a life had you never led it is irrelevant to the question of how good that life is for you. What counts is how you feel about your life when you are actually leading it.
I argue that indicative conditionals are best viewed as having partial truth conditions: “If A, B” is true if A and B are both true, false if A is true and B is false, and lacks truth value if A is false. The truth conditions are shown to explain a variety of important phenomena regarding indicative conditionals, including Adams’ Thesis about the assertability conditions of conditionals, and how indicative conditionals embed in more complex constructions. In particular, the truth conditions are shown to provide the semantic basis for characterising several distinct logics of indicative conditionals, of which the logic of assertion is the main focus of the paper.
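The partial truth conditions stated in the abstract can be tabulated directly. The sketch below just enumerates the three-valued table; the function name and the use of `None` for truth-value gaps are my own conventions, not the paper’s.

```python
# Partial (three-valued) truth conditions for the indicative "If A, B":
# true when A and B are both true, false when A is true and B is false,
# undefined (None) when A is false.
def if_then(a: bool, b: bool):
    if a:
        return b   # antecedent true: the conditional inherits B's value
    return None    # antecedent false: truth-value gap

table = {(a, b): if_then(a, b) for a in (True, False) for b in (True, False)}
assert table[(True, True)] is True
assert table[(True, False)] is False
assert table[(False, True)] is None
assert table[(False, False)] is None
# Adams' Thesis connection: restricting attention to the defined cases
# (A true) makes the chance that "If A, B" is true equal to P(B | A).
```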
Ruth Chang has defended a concept of “parity”, implying that two items may be evaluatively comparable even though neither item is better than or equally good as the other. This paper is an attempt to make this notion of parity more precise, by defining it in terms of the relation “better than”. Given some plausible assumptions, the suggested definiens is shown to state a necessary and sufficient condition for parity.
An agent is autonomous only if she governs her life in accord with her values. If our values had not been shaped by our society’s culture, and by the values of our family and friends, we might not have had any values at all. Hence, living in a society seems to be a precondition of autonomous agency. This is ironic, for a person’s values can be so influenced by the culture of her society that it can seem doubtful that a life governed by them could qualify as self-governed, no matter what else might be the case. The solution to this little puzzle is to recognize that the socialization of a child creates an agent capable of autonomy. A child does not autonomously choose her values, but this does not undermine her autonomy. An adult who does not exercise control over changes in her values, however, might be less than fully autonomous.
A paragraph-by-paragraph commentary on the first chapter of W.D. Ross’s first major work on ethics, The Right and the Good (1930). Each paragraph is reproduced, and a summary provided. Some issues of interest are then pursued, including the nature of definition, the relationship of rightness and moral goodness, the ‘ought implies can’ principle, and whether it is coherent to claim that one has a duty to be motivated by a sense of duty.
Some objections to Geach’s claim that ‘good’ is always essentially attributive are discussed and rejected.
Some reflections on Hussein Kassim’s article on the concept of race in A Companion to Genethics. Kassim claims that modern genetics has shown that there are no such things as human races and that there is no biological entity that corresponds to the concept of race used in everyday language. I believe that his arguments are problematic and that he is wrong when he assumes that there is no morally innocent use of the race concept.
There is a lively debate about the descriptive concept of happiness. What do we mean when we say (using the word to express this descriptive concept) that a person is “happy”? One prominent answer is subjective local desire satisfactionism. On this view, to be happy at a time is to believe, with respect to the things that you want to be true at that time, that they are true. Wayne Davis developed and defended an interesting and sophisticated version of this view in a series of papers. I present, explain, and attempt to refute his version of the theory. I then sketch what I take to be a better theory of happiness. The proposed theory is a form of intrinsic attitudinal hedonism.
The so-called ‘buck-passing account of value’ claims to offer an account to the effect that the value, or goodness, of things amounts to the existence of some natural properties giving us reason for certain pro-reactions. This essay argues that we can hardly accept this offer. The account is obscure already when we consider the relation assumed in BPA to obtain between value and reason-giving. Once the various forms of this relation are distinguished, BPA appears implausible, or no more than a research program. As to the much-debated ‘wrong kind of reason argument’ that Wlodek Rabinowicz and Toni Rønnow-Rasmussen have developed, it is shown to be a trap for buck-passers.
It is argued here that individuals should never be held responsible for being lucky or unlucky, so that the notion of option luck is prima facie unacceptable. But applying the Conditional Equality and Egalitarian Equivalence criteria to problems of allocation under risk provides a rationale for policies which are sometimes similar to policies based on the notion of option luck. Dworkin's concept of option luck plays a key role in the development of his idea of a hypothetical insurance market that can help in calibrating the transfers between unequally talented individuals. The allocation criteria analyzed here enable us to make a critical examination of the performance of insurance markets in general, and to show that Dworkin's hypothetical insurance is highly problematic.
In this paper I discuss three important, distinct phenomena. In my terminology, one is common knowledge of co-presence. Another is mutual recognition. I shall spend the most time on that. The third phenomenon is joint attention. As we shall see, common knowledge of co-presence is essential to mutual recognition; this, in turn, is essential to joint attention.
This paper defends a limited thesis to the effect that there are reasons to be rational. The thesis that is defended is limited this way: As long as you see reasons as ‘primitive’, and you see normative facts as something that cannot be derived from ‘rational’ facts, then you ought to recognize that among the reasons there are, there are also reasons to be rational. The thesis is defended by articulating two distinct and plausible conceptions of rationality that go well with the present view on reasons, and by considering whether there are reasons to be rational on those conceptions of rationality.
In his recent paper ‘Analyticity: An Unfinished Business in Possible-World Semantics’ Wlodek Rabinowicz takes on the task of providing a satisfactory definition of analyticity in the framework of possible-worlds semantics. As usual, what Wlodek proposes is technically well-motivated and very elegant. Moreover, his proposal does deliver an interesting analytic/synthetic distinction when applied to sentences with natural kind terms. However, the longer we thought and talked about it, the more questions we had, questions of both philosophical and technical nature. Hence the idea of this little paper – for how better to honor a philosopher than by trying very hard to criticize him? After quickly running over some background in possible worlds semantics and setting out Wlodek's proposal against that background, we shall bring up and discuss our questions in sections 3–5. In the final section, we shall also make a stab at a different solution to the problem, making use of our own earlier idea of relational modality.
Is it possible to give a justification of our own practice of deductive inference? The purpose of this paper is to explain what such a justification might consist in and what its purpose could be. On the conception that we are going to pursue, to give a justification for a deductive practice means to explain in terms of an intuitively satisfactory notion of validity why the inferences that conform to the practice coincide with the valid ones. That is, a justification should provide an analysis of the notion of validity and show that the inferences that conform to the practice are just the ones that are valid. Moreover, a complete justification should also explain the purpose, or point, of our inferential practice. We are first going to discuss the objection that any justification of our deductive practice must use deduction and therefore be circular. Then we will consider a particular model of justificatory explanation, building on Kreisel’s concept of informal rigour. Finally, in the main part of the paper, we will discuss three ideas for defining the notion of validity: (i) the classical conception according to which the notion of (bivalent) truth is taken as basic and validity is defined in terms of the preservation of truth; (ii) the constructivist idea of starting instead with the notion of (a canonical) proof (or verification) and defining validity in terms of this notion; (iii) the idea of taking the notions of rational acceptance and rejection as given and defining an argument to be valid just in case it is irrational to simultaneously accept its premises and reject its conclusion (or conclusions, if we allow for multiple conclusions). Building on work by Dana Scott, we show that the last conception may be viewed as being, in a certain sense, equivalent to the first one. Finally, we discuss the so-called paradox of inference and the informativeness of deductive arguments.
I argue that the analysis of different kinds of cooperation will benefit from an account of the cognitive and communicative functions required for the cooperation. I investigate different models of cooperation in game theory – reciprocal altruism, indirect reciprocity, cooperation about future goals and conventions – with respect to their cognitive and communicative prerequisites. The cognitive factors considered include recognition of individuals, memory capacity, temporal discounting, anticipatory cognition and theory of mind. The communication considered ranges from simple signalling to full symbolic communication.
Since neither a human mind nor a computer can deal directly with infinite structures, well-behaved models of belief change should operate exclusively on belief states that have a finite representation. Three ways to achieve this without resorting to a finite language are investigated: belief bases, specified meet contraction, and focused propositional extenders. Close connections are shown to hold between the three approaches.
Keywords: finitude, finiteness, belief change, belief base, specified meet contraction, sentential selector, propositional extender, Grove spheres
From a cognitive perspective, this paper summarises a number of theoretical and applied studies conducted by my colleagues and myself on the topic 'interaction with new media'. The focus lies on the users' behaviour: visual information gathering, interaction with the multimodal interface, browsing strategies and attentional processes during hypertext navigation. In addition, we also look at users' expectations and attitudes towards the medium. There are several methods that can be used in order to describe user behaviour and postulate a number of underlying cognitive mechanisms. In the following, I will show how eye-tracking data supplemented by simultaneous or retrospective verbal protocols, keystroke logging, and interviews can help us to investigate users' behavior, the rationality behind this behaviour, and users' attitudes and expectations. The clusters and integration patterns discovered in empirical studies can be used in developing a new generation of multimodal interactive systems within human-computer interaction.
According to prioritarianism, roughly, it is better to benefit a person, the worse off she is. This seems a plausible principle as long as it is applied only to fixed populations. However, once this restriction is lifted, prioritarianism seems to imply that it is better to cause a person to exist at a (positive) welfare level of l than to confer l units on a person who already exists and is at a positive welfare level. Thus, prioritarianism seems to assign too much weight to the welfare of possible future people. It is in this respect even more demanding than total utilitarianism. However, in this article, I argue that all told, prioritarianism is in fact more plausible than total utilitarianism even when it comes to population ethics.
The logical empiricists in Vienna and their Swedish counterparts in Uppsala shared a scientific ethos that cast the philosophical academic as a representative of universalism, disinterestedness, professional loyalty, organized scepticism and public interest. Rudolf Carnap, Axel Hägerström and Ingemar Hedenius regarded themselves as intellectuals, offering their philosophical tools to society. However, when the scientific ethos was articulated by Robert Merton in 1942, the circumstances had drastically changed. The European tradition was left behind. My claim, however, is that neither the professionalism nor the specialized epistemology of analytical philosophy necessarily alienated the philosopher from the public. The gap occurred when the epistemology ceased to be culturally meaningful, as a part of the spirit of the time. The modernistic spirit promoted the ethos of intellectuality. In the 1960s a new ethos took over, in the US and Sweden alike: the philosopher as purely academic professional expert.
There is a striking gap between the moral standards that most of us endorse, and the moral standards that, in practice, we seem able to live up to. This might seem hypocritical. Wlodek has often suggested in discussions, however, that endorsing high moral standards, a “Sunday school morality” so to speak, can make us behave better than we would otherwise do, even if we cannot hope to achieve perfection. I present an argument from evolutionary game theory to support this Sunday school thesis.
James Griffin and others have considered a weaker form of superiority in value as a possible remedy to the Repugnant Conclusion. In this paper, I demonstrate that, in an additive context, this weaker form collapses into a stronger form of superiority. And in a non-additive context, it does not necessarily amount to a radical value difference at all. I then spell out the consequences of these results for different interpretations of Griffin’s suggestion regarding population ethics. None of them comes out very successful, but perhaps they nevertheless retain some interest.
An ideal of authenticity is deeply embedded in western culture. Following Harry Frankfurt, authenticity might be characterized as fidelity to the person’s essential nature. This suggestion fails, no matter whether personal identity is construed in a narrow, metaphysical sense or in the broader sense involved in talk of individual self-conceptions and social groups. Another Frankfurtian approach would appeal to self-reflective attitudes, but this falls prey to counterexamples as well. A third approach sees personal identity as a complex empirical fact. This fits the phenomena better, but leaves no room for fidelity to self. I conclude that the modern ideal of authenticity does not lend itself to a coherent and rationally compelling account, and will more profitably be seen as a mixture of logically unrelated concepts.
Wlodek Rabinowicz (2002) has challenged the thesis that deliberation as to what one is to do and prediction as to what one will do cannot be jointly undertaken coherently. He maintains that even if it were true, it would not have the kind of relevance for theories of rational choice and game theory that some of its proponents claim it has. He also claims that the proponents have not made a compelling case for the thesis. I disagree. I will devote my tribute to Wlodek on his 60th birthday to an effort to respond to his cogently presented essay. In doing so, I carry on a tradition of long standing where Wlodek and I maintain our friendship by challenging each other’s work.
Luck-egalitarianism is often formulated as the view that it is in itself bad for some to be worse off than others through no fault or choice of their own. This formulation is surprisingly ambiguous. When these ambiguities are sorted out it can be seen that choice and responsibility play a different role in egalitarian justice than is normally assumed. Specifically, there are cases where each member of a group is worse off through no choice or fault of his own and yet this is not bad from a luck-egalitarian point of view because the group is worse off through the choices or faults of its members. Moreover, there are cases where each member of a group is worse off through his own choice or fault and yet this is bad partly because the group is worse off regardless of the choices or faults of its members.
Most areas of logic can be approached either semantically or syntactically. Typically, the approaches are linked through a completeness or representation theorem. The two kinds of theorem serve a similar purpose, yet there also seems to be some residual distinction between them. In what respects do they differ, and how important are the differences? Can we have one without the other? We discuss these questions, with examples from a variety of different logical systems.
In this paper I argue that the infinite regress of resemblance is vicious in the guise it is given by Russell but that it is virtuous if generated in a (contemporary) trope theoretical framework. To explain why this is so I investigate the infinite regress argument. I find that there is but one interesting and substantial way in which the distinction between vicious and virtuous regresses can be understood: The Dependence Understanding. I argue, furthermore, that to be able to decide whether an infinite regress exhibits a dependence pattern of a vicious or a virtuous kind, facts about the theoretical context in which it is generated become essential. It is precisely because of differences in context that the Russellian resemblance regress is vicious whereas its trope theoretical counterpart is not.
According to a theorem recently proved in the theory of logical aggregation, any nonconstant social judgement function that satisfies independence of irrelevant alternatives (IIA) is dictatorial. We show that the strong and not very plausible IIA condition can be replaced with a minimal independence assumption plus a Pareto-like condition. This new version of the impossibility theorem likens it to Arrow’s and arguably enhances its paradoxical value.
Many mental states and acts, it is said, are intentional mental states and processes. They have a direction. Sometimes such states and processes go wrong, they miss their target. Sometimes they hit their target. Two ways of understanding these possibilities are the theory of satisfaction conditions and the theory of correctness conditions. I compare the merits of the two theories as accounts of intentionality and argue for the superiority of the theory of correctness conditions. Not all mental states and processes which enjoy the property of intentionality can go wrong. Knowledge, in all its forms, cannot go wrong. The question then arises: what relation if any is there between mental states and processes which have correctness conditions and those which do not have correctness conditions? I argue that the intentionality of states which have correctness conditions should be understood in terms of the intentionality of knowledge.
Contemporary philosophy of health has been quite focused on the problem of determining the nature of the concepts of health, illness and disease from a scientific point of view. Some theorists claim and argue that these concepts are value-free and descriptive in the same sense as the concepts of atom, metal and rain are value-free and descriptive. To say that a person has a certain disease or that he or she is unhealthy is thus to objectively describe this person. On the other hand it certainly does not preclude an additional evaluation of the state of affairs as undesirable or bad. The basic scientific description and the evaluation are, however, two independent matters, according to this kind of theory.
According to T.M. Scanlon’s ‘buck-passing’ account of value, for something to be valuable is not for it to possess value as a simple and unanalysable property, but rather to have other properties that provide reasons to take up an attitude in favour of it or against it or to act in certain ways in regard to it. Jonathan Dancy has argued that passing the buck threatens to resolve prematurely the debate between consequentialism and deontology in favour of consequentialism (Dancy 2000). In this paper I shall discuss this claim. Section II suggests that Dancy’s objection is well-founded, but not in the precise sense he imagines. Dancy’s instructive criticism raises another intriguing question that will be dealt with in section III. The question is this: given that the buck-passing account of value is accepted, to what extent can we draw a distinction between consequentialism and deontology? The way in which Scanlon might answer this question would nullify Dancy’s worry, but it suffers from other problems. Ultimately, I shall suggest that the buck-passing account does reduce the conceptual space for the consequentialism/deontology distinction, but that the ways in which it does so are tolerable. There remain a number of useful distinctions between normative theories that the buck-passer is entitled to draw. Some of those capture important aspects of what intuitively divides consequentialists and deontologists.
It is usually taken for granted that a theory of belief revision should describe justified changes from one belief state to another belief state where the output state is uniquely determined given the input state and the new information. This uniqueness assumption has been questioned by Lindström and Rabinowicz whose theory of relational belief revision allows for the result of belief revision to be indeterminate in the sense that there may be many possible end states that are equally rational. The main aim of the paper is to inquire into the possible motives behind this generalization of the standard functional setting.
This paper argues that Wlodek Rabinowicz and Toni Rønnow-Rasmussen are wrong in thinking that what they call the ‘Wrong Kind of Reasons’ problem presents a serious problem for the idea that the fact that there are reasons to have a pro-attitude towards an object implies that it is valuable. It seems a serious problem to them because they mistakenly reject the view that some reasons that in everyday language are described as reasons for an attitude are really reasons for wanting, intending or trying to have it. This view is here defended against Rabinowicz’s and Rønnow-Rasmussen’s attack by an account of what attitude a reason is a reason for in terms of what direct response it justifies as an outcome of a piece of reasoning.
According to Jon Elster, mechanisms are frequently occurring and easily recognizable causal patterns that are triggered under generally unknown conditions or with indeterminate consequences. In the absence of laws, moreover, mechanisms provide explanations. In this paper I argue that Elster’s view has difficulty accommodating the growth of knowledge. Normally, filling in the causal picture without revising it should not threaten one’s explanation. But in Elster’s case it seems to. The critique is constructive in the sense that it is built up from a discussion of a mechanism that might explain ‘unwarranted’ risk taking in connection with swimming—a mechanism that is mirrored in the proverb: The best swimmers drown.
As Held, May, Tännsjö and others have argued, it can be plausible to hold loosely structured sets of individuals morally responsible for failing to act collectively, if this would be needed to prevent some harm. On the other hand it is commonly assumed that (collective) agency is a necessary condition for (collective) responsibility. I show that loosely structured inactive groups can meet this requirement if we employ a weak (but nonetheless non-reductionist) notion of collective agency. This notion can be defended on independent grounds. The resulting position on distribution of responsibility is more restrictive than Held’s, May’s or Tännsjö’s, and I find this consequence intuitively attractive.
Suppose that you are one amongst many people who face a certain question; that you are all equally intelligent, equally informed and equally impartial; that you have each formed an answer to the question without deference to others; that you differ from most others in the answer you give; and that you are aware that those things are true. Should the bare testimonial evidence that you are wrong about the question on hand lead you to change the answer you give? The Condorcetian jury theorems suggest that it should. But this can’t be right, at least not for answers that are connected in the web of your belief with other positions you take. Revise this one answer only and you will hold a judgment that is not well connected with your other judgments. Apply the revisionary strategy more generally and you will have to face problems of inconsistency or path-dependency.
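The core Condorcetian claim invoked here can be illustrated with a small Monte Carlo sketch. Everything in the code (function name, competence level 0.6, jury sizes) is an illustrative assumption of mine, not material from the paper; the point is only that majority reliability grows with group size when voters are independent and individually better than chance.

```python
import random

def majority_correct_prob(n, p, trials=20000, seed=0):
    """Estimate the probability that a majority of n independent voters,
    each correct with probability p, gets the answer right."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        correct_votes = sum(rng.random() < p for _ in range(n))
        if correct_votes > n / 2:
            hits += 1
    return hits / trials

# With individual competence p = 0.6, a jury of 101 is far more reliable
# than a jury of 11 -- the pressure behind the testimonial argument above.
small = majority_correct_prob(11, 0.6)
large = majority_correct_prob(101, 0.6)
assert small < large
```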
In this paper I present the model of 'bounded revision' that is based on two-dimensional revision functions taking as arguments pairs consisting of an input sentence and a reference sentence. The key idea is similar to the model of 'revision by comparison' investigated by Fermé and Rott (_Artificial Intelligence_ 157, 2004). In contrast to the latter, however, bounded revision satisfies the AGM axioms as well as the Darwiche-Pearl axioms. Two one-dimensional special cases are obtained by setting one argument of the two-dimensional revision operation to certain extremal values. Bounded revision thus fills the space between conservative revision (also known as natural revision) and moderate revision (also known as lexicographic revision). I argue that two-dimensional revision operations add decisively to the expressive power of qualitative approaches that refrain from assuming numbers as measures of degrees of belief.
People are prone to ascribe value to persons they love. However, the relation between love and value is far from straightforward. This is particularly evident given certain views on the nature of love. Love is here depicted as an attitude that takes non-fungible persons as intentional objects. Taking this view as a starting point, it is then shown why it is difficult to combine with certain views on value. The main challenge comes from the idea that value judgements are universalizable. This view squares badly with the thought that the people whom we love are irreplaceable. Introducing the idea that properties may have different functions in the intentional content of the attitude, this paper determines what precisely it is about love that makes it hard to combine with universalizability. Moreover, it suggests two ways of meeting this challenge.
Adam Grove showed how David Lewis’s sphere systems can be used to model AGM, but it was Sten Lindström and Wlodek Rabinowicz who provided a philosophically interesting interpretation of Grove’s modeling: spheres may be thought of as representing theories on which the doxastic agent can fall back. In this paper we consider the possibility for the agent to go the other way: to push on.
The 'buck-passing account' of goodness, as T. M. Scanlon dubbed it, is by now both familiar and much controverted. Saying that a thing is good, according to the buck-passer, is saying no more than that some unspecified facts constitute sufficient reason for some unspecified pro-act or pro-attitude towards it. Wlodek Rabinowicz and Toni Rønnow-Rasmussen have presented objections to this account with clarity and fair-mindedness, objections to which I shall respond in section 3. But I begin with some stage-setting remarks about normativity and reasons in section 1, and then consider how to formulate the buck-passing account in section 2.
There is evidence that the descriptivity of simple ‘ought’-judgments comes to substantially more than that they are universalizable. Grammatical and logical evidence that includes the matter of ‘Frege-Geach problems’ argues for this, as does evidence that their practicality or ‘prescriptivity’ is not exactly that of their corresponding ‘commands’. Hare had nearly, in the ‘archangelic agreement theorem’ of Moral Thinking, an accommodation for this evidence. He was in a position to say that corresponding to an 'ought'-judgment is a certain descriptive proposition that states the objective of the person making that judgment, and that moral judgments can be taken as conjunctions prescriptive of this objective. It is explained how revisions of Universal Prescriptivism along these lines can be comfortable with that otherwise troublesome evidence.
According to Parrondo’s Paradox, there are cases in which a subject facing two probabilistically losing strategies can obtain a probabilistically winning strategy by combining these losing strategies in a random way. Conversely, the subject can have two winning strategies that, when combined, result in a probabilistically losing strategy. This unexpected result has found applications in economics, biology and electronic engineering, among other fields. It is argued that this result should have some applicability in epistemology as well.
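The paradox can be exhibited with the standard pair of coin games from the Parrondo literature; the particular bias EPS = 0.005 and the mod-3 rule are the textbook illustration, not anything drawn from this paper. Game A and Game B each lose money in the long run when played alone, yet randomly alternating between them wins.

```python
import random

EPS = 0.005  # small bias that makes both games losing on their own

def play_a(capital, rng):
    # Game A: a coin biased slightly against the player (win prob 1/2 - EPS)
    return capital + (1 if rng.random() < 0.5 - EPS else -1)

def play_b(capital, rng):
    # Game B: capital-dependent coin -- very unfavourable when capital is
    # a multiple of 3, favourable otherwise; losing overall on its own
    p = (0.1 - EPS) if capital % 3 == 0 else (0.75 - EPS)
    return capital + (1 if rng.random() < p else -1)

def play_mixed(capital, rng):
    # The random combination: flip a fair coin each round to pick A or B
    return play_a(capital, rng) if rng.random() < 0.5 else play_b(capital, rng)

def simulate(game, rounds=500_000, seed=1):
    rng = random.Random(seed)
    capital = 0
    for _ in range(rounds):
        capital = game(capital, rng)
    return capital
```

Over a long run, `simulate(play_a)` and `simulate(play_b)` drift downward while `simulate(play_mixed)` drifts upward: the random mixture spends less time in Game B's unfavourable mod-3 state than Game B does on its own.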
In his recent book, Teleological Realism, Scott Sehon defends a teleological account of explanations in common sense psychology [CSP], arguing that if such explanations were causal, CSP would be reducible to physical science. He asserts that since it is not thus reducible, its success in explaining human behavior is a mystery. I contend that many CSP explanations are causal, although in a different sense than the causal explanations of physical science. I set out the distinctive features of CSP, object to the physicalist claim that explanations in physical science are the basic type, and argue that CSP explanations do not need external support from physical science and that reflection on how they work dispels any mystery about their success.
In his fetishist argument, Michael Smith raises an important question: what is the content of the motivational states that account for moral motivation? Although the argument has been widely discussed, this question has not received the attention it deserves. In the present paper, I am not particularly concerned with the fetishist argument as such, but use it as a point of departure for a discussion of how externalism can account for moral motivation. More precisely, I investigate various accounts of moral motivation and explain how externalists can employ them in order to answer this question.
A number of philosophers have recently argued that (i) consciousness properties are identical with some set of physical or functional properties and that (ii) we can explain away the frequently felt puzzlement about this claim as a delusion or confusion generated by our different ways of apprehending or thinking about consciousness. In David Papineau's version of this view, our fundamental delusion is an "intuition of mind-brain distinctness" generated by the difference between our phenomenal and material concepts of consciousness. I argue that Papineau's account is incorrect. To begin with, it is arguable that we are mystified about physicalism even when the account predicts that we shouldn't be. Further, and worse, the account seems to predict that an intuition of distinctness will arise in cases where it patently does not. I conclude by considering what lessons we can, and can't, draw about the mystery of consciousness from this.
Peter Singer has been skeptical towards the idea of reflective equilibrium since the 1970s, and thinks that certain recent empirical research about moral intuitions, performed by Joshua Greene at Princeton University, provides support for this skepticism. Greene and his colleagues used modern brain imaging techniques to explore what went on in people’s brains when they were contemplating certain practical dilemmas. The aim of this essay is to see if one can squeeze out any skeptical implications from their results. The main conclusion is that these results do indeed provide material for a skeptical challenge, but that this is a challenge not specifically for the idea of reflective equilibrium, but for the possibility of rational argumentation in ethics in general, including Singer’s own attempts to justify his moral convictions.
Contrary to what seems to be the received wisdom in political theory, there is no way that we can affect future people, at least not people who live several generations later than we do and who are not taken care of by our standard concern for our close descendants. This is good news for the all affected principle. The fact that we cannot involve future people in our present decision-making does not mean, then, that we have to depart from the requirements of the all affected principle. And this principle is natural to adopt if we conceive of democracy as a way of aggregating interests. So the principle remains a live option in that sphere of democratic theory. However, its role is limited by the fact that we have to make moral decisions about the future. Here an epistemic notion of democracy is more to the point than the idea of a method of aggregating interests.
Where there are infinitely many possible basic states of the world, a standard probability function must assign zero probability to each state—since assigning each state any positive probability would make the probabilities sum to more than one. This generates problems for any decision theory that appeals to expected utility or related notions. For it leads to the view that a situation in which one wins a million dollars if any of a thousand equally probable states is realized has an expected value of zero (since each such state has probability zero). But such a situation dominates the situation in which one wins nothing no matter what (which also has an expected value of zero), and so surely is more desirable. I formulate and defend some principles for evaluating options where standard probability functions cannot strictly represent probability—and in particular for where there is an infinitely spread, uniform distribution of probability. The principles appeal to standard probability functions, but overcome at least some of their limitations in such cases.
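The zero-expected-value problem can be written out explicitly, using the abstract's own example of a million-dollar prize on any of a thousand states:

\[
\mathrm{EU}(\text{bet}) \;=\; \sum_{i=1}^{1000} P(s_i)\cdot \$1{,}000{,}000 \;=\; \sum_{i=1}^{1000} 0 \cdot \$1{,}000{,}000 \;=\; 0 \;=\; \mathrm{EU}(\text{win nothing}),
\]

even though the bet weakly dominates winning nothing: it pays at least as much in every state and strictly more in a thousand of them.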
Pragmatic foundationalism is the view that success is both necessary and sufficient for the rational acceptability of a procedure of choice. This essay investigates the plausibility of this claim in the context of decision-making over time against the background of three different standards of success. It argues, first, that success is not sufficient for accepting a procedure of choice. Secondly, that success is not necessary, since cases can be constructed in which no clear, unambiguous notion of pragmatic success is available yet a rational course of action is open to the agent. Which choice procedure is rationally superior depends on the complete description of the situation. Therefore, success does not determine the rational procedure of choice. However, this does not mean that pragmatic considerations are altogether irrelevant. The essay concludes with some remarks about the proper role of success in the justification of a choice procedure.
In order to consider whether Wittgenstein's strategy regarding scepticism succeeds or fails, I will examine his approach to certainty. To this end, I will establish a comparison between the different uses of language mentioned in On Certainty and his distinction between meaningful, senseless, and nonsense statements in the Tractatus. This comparison has three advantages: first, it allows us to clarify the role of the so-called special propositions in On Certainty; second, it illuminates the relationship between some features of special propositions in On Certainty and the characteristics that define senseless statements in the Tractatus; and, finally, it shows the status of so-called insightful nonsense in the Tractatus. As a consequence of this argument, I defend a halfway house between the so-called traditional and new interpretations of the Tractatus.
This paper considers three general views about the nature of moral obligation and three particular answers (with which these views are typically associated) concerning the following question: if on Monday you lend me a book that I promise to return to you by Friday, what precisely is my obligation to you and what constitutes its fulfillment? The example is borrowed from W.D. Ross, who in The Right and the Good proposed what he called the Objective View of obligation, from which he inferred what is here called the First Answer to the question. In Foundations of Ethics Ross repudiated the Objective View in favor of the Subjective View, from which he inferred a Second Answer. In this paper the Objective and Subjective Views and the First and Second Answers are each rejected in favor of the Prospective View and a Third Answer. The implications of the Prospective View for another question closely related to the original question are then investigated: what precisely is your right regarding my returning the book and what constitutes its satisfaction?