Research Topics

Applicants are strongly encouraged to submit their own innovative proposals. The topic of the proposed research should preferably lie at the intersection of established fields or debates on normativity. The following proposals merely serve to give a general idea of possible topics. It is of course welcome if a proposal aligns with one of these projects, but applicants should feel free to submit novel proposals.


Natural Functions and Social Norms: Strengths and Limits of the 'Missing Mechanism' Argument

Functional explanations are teleological: the explanans is the purpose to which the explananda contribute. Classical functional explanations in social science usually explain social phenomena by appeal to the purpose of maintaining and preserving society. It is argued that social norms and structures ensure social organization and can thus be understood in close analogy to the role of organs in biological organisms.

Since its heyday in the 1960s, functionalism in social science has come under increasing pressure. Teleological explanations tend to be accepted only insofar as they conform to intentionalism, i.e. where the purpose in question – the preservation of society – is either an intentional agent’s goal, an implication thereof, or an “invisible hand” effect, i.e. a non-intended consequence of intentional actions (for a classical statement, cf. Elster 1982). A particularly important role in the decline of functionalism has been played by what has come to be called (cf. Pettit 1996) the “missing mechanism” argument against non-intentional functional explanations in social science. The argument is that functional explanations in biology are valid because biological evolution provides a mechanism of variation and selection, while no such mechanism can be found at the level of societies, social structures, and norms.

This part of the project examines the “missing mechanism” argument in one of the following directions:

  • Are all classical functionalist theories of social norms really affected by the “missing mechanism” problem? E.g., does the force of this argument also extend to the residual functionalist conception of social norms that is sometimes endorsed even in recent research in social ontology (e.g., Epstein 2015), or to Robert K. Merton’s latent functions?
  • More recent attempts to apply the evolutionary model of natural functions to social phenomena include group selection theory (e.g., Sober/Wilson 1998) and memetics (e.g., Dennett 1993). These views have led a marginal existence at the fringes of social science, and most philosophical accounts have tended to be rather critical. What view of social norms emerges from these theories?


Analysing Paradigms of Moral Reasoning in Animal Ethics

The dominant theoretical framework for arguing for animals’ moral standing is moral individualism. According to Rachels, who coined the term in the context of animal ethics, moral individualists argue for the moral status of animals on the basis of individual capacities: “The basic idea [of moral individualism] is that how an individual may be treated is to be determined, not by considering his group memberships, but by considering his own particular characteristics.” (Rachels 1990, 173) Capacities such as the ability to suffer and experience pain thus become important aspects of moral reasoning in this framework. This idea provides the basis for early animal ethicists like Singer (1979), Regan (1983) or Rollin (1989) and for a majority of thinkers in animal ethics today. They share the idea that the moral community can be extended to animals on the basis of their individual abilities.

Already in 1978, a thorough critique of individualistic accounts in animal ethics was formulated (Diamond 1978). As a consequence, the question emerged whether alternative theoretical frameworks can provide arguments for the moral standing of animals. In the recent past the debate has seen a revival (Crary 2012, May 2014, Wolfe 2010, Aigner/Grimm 2016), in which relationalism has been seen as the main opponent of individualism. For instance, Todd May (2014) distinguishes between capacity-based reasons (CBR) and relational-based reasons (RBR) as the two sources of animals’ moral standing.

Against this background, the following questions will be addressed in this PhD-project:

  • What are the presuppositions of the individualistic and the relational framework, and what idea of morality do they reflect?
  • Must the two theoretical frameworks be understood as contradicting each other? Clare Palmer (2010), e.g., argues that negative duties toward animals can be justified within an individualistic framework and positive duties within a relational account.
  • Are the individualistic and relational paradigms comprehensive? As argued elsewhere (Grimm/Aigner 2016), Wittgensteinian accounts, for example, go beyond the dichotomy of CBR and RBR formulated by May (2014).

By answering these research questions, light will be shed on dominant and often unquestioned narratives about the sources of normativity in animal ethics.


Normative Thought and Normative Language

Normative thought and normative language give rise to a range of important meta-normative debates about the status of normative judgments and normative statements. The traditional battleground for these debates concerns moral thought and language, but analogous issues arise for other forms of normative thought and speech, for example concerning rational, aesthetic, epistemic, or legal norms. One classic question is that of cognitivism: are normative judgments (such as the judgment that stealing is bad) cognitive states, e.g. beliefs? Or are they non-cognitive, desire-like states of mind? Another, related question is whether normative judgments are capable of being true or false – here the debate touches upon discussions surrounding the nature of truth. Yet another point of debate concerns the correct account of the meaning of normative sentences and, relatedly, the nature of the speech-acts one performs by using such sentences (e.g. the sentence “Stealing is bad.”).

It is this last issue that—curiously—has played a key role in the justification of cognitivism and realism in metaethics. Many have argued that moral expressivism, the view that it is the semantic function of moral sentences to express non-cognitive attitudes, founders on the so-called “Frege-Geach Problem”. Thus a very specific problem concerning the meaning of moral expressions has been taken to force us to adopt a cognitivist view of moral judgments and, correspondingly, moral realism.
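The problem, due to Geach, can be stated with the standard modus ponens example, in which the moral sentence occurs once asserted and once embedded unasserted:

```latex
\begin{tabular}{ll}
(P1) & Stealing is bad.\\
(P2) & If stealing is bad, then getting one's little brother to steal is bad.\\
\hline
(C)  & Getting one's little brother to steal is bad.
\end{tabular}
```

The argument is valid, so “stealing is bad” should mean the same in (P1) and in the antecedent of (P2); yet a speaker who asserts (P2) expresses no disapproval of stealing. The expressivist therefore owes an account of the unasserted occurrence that preserves the validity of the inference without equivocation.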

Recent years have seen renewed interest in the viability of expressivism, and in ways of addressing the Frege-Geach problem (see e.g. the work of Gibbard and Schroeder). This revived interest concerns not only moral or otherwise clearly normative language. Expressivism has also been explored, for example, concerning conditionals (e.g. Edgington), epistemic modals (e.g. Yalcin, Schnieder) or probability-ascribing sentences (e.g. Price). As a result of this recent work, it is no longer regarded as orthodoxy that the Frege-Geach Problem represents a decisive reason against expressivism and therefore non-cognitivism.

Another recent phenomenon is the development of many forms of “hybrid expressivism”, i.e. theories that postulate an expressive as well as a descriptive component of meaning (e.g. Horgan and Timmons). These theories might benefit from recent work on other phenomena which suggests that there are non-truth-conditional aspects of meaning (e.g. by Kaplan, Predelli, Gutzmann). The spectrum of new solutions now available to normative non-cognitivists is not restricted to expressivism: the recent debate about relativism and contextualism has led to a much more detailed understanding of the semanticist’s options. In fact, Gibbard, a sophisticated expressivist, can be seen as proposing a theory of the contents of normative judgments that belongs to the relativist family. Finlay’s recent account treats normative language as a special case of modality and consequently employs Kratzer’s semantics of modals. It could be argued that his account is a version of contextualism.

There are many questions in this area that a PhD-thesis might address. To give just a few examples:

  • What exactly does the Frege-Geach problem show about normative language?
  • Are there suitable semantic frameworks for modelling normative language?
  • Are there notions of representational content that permit an adequate modelling of the content of normative thought and language?
  • Is expressivism the only option for normative non-cognitivists?


Joint Actions and Normative Expectations

Many normative phenomena are situated in the context of coordination, cooperation, and communication between agents. Recent research on this topic has tended to highlight that coordination, cooperation, and communication should be understood as special cases of joint action, i.e. as something agents do together. One of the controversial issues in recent general accounts of joint action concerns the kind of reciprocal attitudes between participants in joint action. Some philosophers have claimed that for an agent to participate in a joint action, he or she has to take a cognitive stance towards the others: for him or her to intend to do his or her part as his or her part, he or she has to believe, suppose, or assume that the others will do their parts too. In this view, the individual participants in joint action cognitively represent (with a mind-to-world direction of fit) their partners as doing their parts. There are a great number of variations on this view, depending on whether the cognitive attitude in question is taken to be one of belief (Tuomela, Bratman), presupposition (Searle), (cognitive) acceptance or reliance (Alonso 2006; cf. also Schmid 2013). One of the arguments sometimes presented for this view is the widely accepted claim that agents cannot intend to do what they take to be impossible to perform. If this is true, and if participants in joint action know that their contribution will constitute a part of a joint action only if the others perform theirs, it seems plausible to assume that the intention to play a part in a joint action involves some sense that the other participants' contributions will be performed, or are at least not unlikely to be performed.

Normativist conceptions of collective intentionality and joint action tend to take a different approach. On such views, a participant in a joint action doesn’t merely believe or predict that the other participants will do, or are likely to do, their parts. Rather, the attitude taken towards the other participants is of a normative kind. The joint intention is taken to furnish the participants with normative reasons to make their contribution and to expect the others to make theirs. The expectation at work between the participants is thus normative rather than cognitive. Normative expectations differ from cognitive expectations in their direction of fit (world-to-mind rather than mind-to-world), in their limited scope (normative expectations are addressed to persons) and in their immunity to disconfirmation (there is no rational requirement to drop a normative expectation just because it has not been met).

Both these approaches to collective intentionality have their strengths and weaknesses and, even though the two approaches seem to be quite antithetic in spirit, it seems promising to examine possibilities of combining them. To this end, there appears much to be learned from the philosophy of the emotions. Emotions (affective intentional states) typically combine cognitive and normative elements. In particular the literature on trust, conceived as an affective attitude (Jones 1996), looks to provide a rich array of material with considerable potential for the analysis of the basic structure of collective intentionality (cf. Hollis 1998). A hypothesis that seems congenial to parts of recent philosophy of trust (e.g. Pettit 1995; 2004) is that cognitive and normative expectation are both active in the kind of interpersonal trust at work among the participants in joint action. This involves the participants cognitively representing each other as providing each other, through that very representation, a further normative and motivating reason to perform their parts. This sub-project engages with the philosophy of trust, bringing the results to bear on the question of the relation between cognitive and normative expectation in joint action.


The Normativity of Logic and Logical Pluralism

This part of the project will focus on the philosophy of logic. The central question addressed here is in what sense one can say that logic is normative. Logic is usually defined today as the study of valid inference. Understood in this way, it is clearly prescriptive in character and is usually taken to be connected to certain epistemic norms concerning what human reasoning, whether scientific or informal, ought to look like. For instance, we usually take logical consistency to be a central condition for successful reasoning. The view that logic is normative in character has a long philosophical history. For instance, Frege, one of the fathers of modern quantificational logic, convincingly argued that logic is a normative science, similar in character to ethics. Thus, according to him, a law of logic such as the law of excluded middle should effectively be viewed as a law of rational thought.

The dissertation project will build on recent work in the history and philosophy of logic (by Hartry Field, Florian Steinberger, Gilbert Harman, among others) and further investigate the particular normative nature of logic. Research on the project will focus on two related issues. The first topic concerns the debate between proponents of logical monism and of logical pluralism. Monists argue that there is one correct or core logic employed in human reasoning. Pluralists such as Greg Restall and Stewart Shapiro, on the other hand, deny this. They hold that there are in fact different and equally acceptable notions of logical consequence which can be applied in different theoretical contexts. Logical pluralism thus amounts to a kind of relativism about logic or logical frameworks. The dissertation will give a first systematic survey of these debates and of the different approaches to this form of logical relativism. Based on this, it will analyze the implications of logical pluralism for the view that logical laws are normative in character.
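One standard way to make the pluralist’s point concrete is the law of excluded middle, which separates two well-established consequence relations:

```latex
\vdash_{\mathrm{CL}} \varphi \lor \neg\varphi
\qquad \text{whereas} \qquad
\nvdash_{\mathrm{IL}} \varphi \lor \neg\varphi
```

Excluded middle is a theorem of classical logic (CL) but not of intuitionistic logic (IL). A pluralist in the spirit of Beall and Restall can regard both consequence relations as legitimate, which sharpens the normative question: if several logics are equally correct, which of them, if any, binds our reasoning?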

The second topic to be studied in the dissertation concerns the question in what sense logic is normative. A general idea, recently formulated in work by Steinberger, that will be analyzed here is that logic provides constitutive (as opposed to regulative) norms for reasoning. Work on the precise nature of logical norms will connect the philosophy of logic with debates in other philosophical fields. This concerns, in particular, John Broome’s work on the normativity of reasoning, but also discussions of the nature of norms in moral philosophy as well as of epistemic norms in epistemology.


The Normativity of Mathematical and Religious Propositions

At first glance, mathematical and religious propositions seem to be located at diametrically opposed ends of the spectrum of certainty: the former being considered the very paradigm of certainty, the latter as uncertain and contestable. In the philosophy of Ludwig Wittgenstein, however, these two sorts of propositions, seen from an epistemological point of view, tend to converge. Both can be said to correspond to the type of “hinge propositions” – propositions large parts of our language and beliefs rest on (cf. Kusch 2016) – and to belong to the grammar of language. In his Remarks on the Foundations of Mathematics Wittgenstein says: “The connexion which is not supposed to be a causal, experiential one, but much stricter and harder, so rigid even, that the one thing somehow already is the other, is always a connexion in grammar.” (Wittgenstein 1978, I, § 128); and he ponders the idea of “theology as grammar” (Wittgenstein 1953, § 373). This means that both sorts of propositions are normative (cf. Ramharter/Weiberg 2006, pp. 69-71). For mathematical propositions Wittgenstein renders this explicit when he says: “[W]e see as a norm the procedure that 3 things and 2 things make 5 things.” (Wittgenstein 1978, VI, § 9, p. 311), or: “Mathematics forms a network of norms.” (Wittgenstein 1978, VII, § 67, p. 431). For religious propositions it remains implicit in Wittgenstein’s considerations in his Lectures on Religious Belief, where he states (inter alia) that “Those people who had faith didn’t apply the doubt which would ordinarily apply to any historical propositions” (Wittgenstein 1966, p. 57).

What thus seemed very different at first sight becomes very close from a certain point of view. For both mathematical and religious propositions, their normative and foundational status is central, if not characteristic. The research questions to be answered are therefore: Are the significant differences between mathematical and religious propositions due to differences in the types of normativity found in the two fields? Or can their differences be better explained as stemming from independent grounds, thus leaving the concept of normativity univocal? And what can be learned from the comparison about the relationships between the concepts of normativity, lawlike necessity, and certainty?


The Normativity of Algorithms

Usually algorithms are not seen as having anything to do with normativity; ethics and politics are seen as domains separate from the technical. This project questions this assumption; drawing on, and contributing to, normative theories and theories from philosophy of technology, it investigates what it could mean to say that algorithms are “normative”.

We tend to make a distinction between, on the one hand, “human” domains that have to do with value and power, and, on the other hand, “technical” or scientific domains that include computing and coding. Hence traditional normative theory is focused on human agents. However, this distinction is not very helpful when it comes to evaluating today’s algorithms, which often take on the role of “artificial agents” and do much more than machines previously could. Consider for instance social media algorithms that select news, algorithms in the financial world that trade, or algorithms that hire people or even write journalistic pieces. The problematic assumption is reflected in various normative theories, ranging from moral theories such as deontology and consequentialism to political theories such as critical theory (Marx, Foucault), feminism, and so on. The focus is always on the human; technology is seen as purely instrumental, and the normativity of technology remains out of sight.

In philosophy of technology, non-instrumental, more normative conceptions of technology have been developed which could be used not only to re-think the normativity of algorithms but also to re-think normative theories that are mainly focused on the human. There has been some preliminary work on algorithms and normativity, for instance on algorithms and power (Lash 2007), and some authors have argued that algorithms embody ideology (e.g. Beer 2009, Danaher 2016). Yet there has been little attention to the precise normative roles and dimensions of algorithms, and little has been done to integrate the various theories inside and outside philosophy of technology that say something about the ethics and politics of algorithms.

This PhD project aims to investigate what it could mean to say that algorithms are “normative” and how we may adapt existing normative theories and theories in philosophy of technology in order to account for the normative role and dimension of algorithms. The project will draw on, and critically discuss, a number of approaches both in normative theory and in philosophy of technology:

  • Normative ethical theories, including deontology, consequentialism, and virtue ethics.
  • Political theories and theories about power, in particular Marx, Foucault (e.g. Foucault 1982), and feminist theory.
  • Approaches in philosophy of technology that have questioned the non-instrumentality of technology and have pointed to its normative role, in particular approaches inspired by critical theory (Feenberg 1991, Winner 1986), by phenomenology (e.g. Verbeek 2005), and by philosophy of information (e.g. Floridi and Sanders 2004).
  • Recent literature on the ethics of algorithms (e.g. Kraemer et al. 2011, Ananny 2016, Beer 2016).

The project will not be limited to literature study and conceptual work in the sense of discussions of theory, however. It will also look at particular cases. The candidate may choose from cases such as algorithms in social media, biased hiring algorithms, financial algorithms, or writing algorithms.
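To make the case of biased hiring algorithms concrete, the following sketch (a purely hypothetical scoring rule, not drawn from any actual system) shows how normativity can hide in seemingly technical parameters:

```python
# Hypothetical hiring-score sketch (illustrative only): a "neutral"
# ranking rule whose weights embed contestable value judgments.

def score(candidate):
    # Rewarding uninterrupted tenure is itself a normative choice:
    # penalising career gaps (e.g. for care work) goes beyond "merit".
    s = 2.0 * candidate["years_experience"]
    s -= 3.0 * candidate["career_gap_years"]
    return s

applicants = [
    {"name": "A", "years_experience": 8, "career_gap_years": 0},
    {"name": "B", "years_experience": 10, "career_gap_years": 2},
]

# B has more experience, but the gap penalty reverses the ranking.
ranked = sorted(applicants, key=score, reverse=True)
print([a["name"] for a in ranked])  # → ['A', 'B']
```

The weight on career gaps looks like a mere tuning parameter, but it encodes a value judgment; this is the kind of implicit normativity the project would seek to make explicit.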


Normativity in Amoral Scenarios

Moral rules and values are not only part of our daily life; they also have an impact on scenarios which are by definition “amoral”. In ethics, war in particular has been regarded as a typically amoral scenario. Despite this diagnosis, moral and political philosophers have been engaged in elaborating rules that aim not only at containing war violence, but also at justifying normative rules of “fair fight” (Bellamy 2006). Recently, the debate on morality in amoral scenarios has been spurred by authors who defend a different approach. They argue that even though there are distinct amoral settings, such as war, moral philosophy nevertheless needs to explore the “deep morality” of these scenarios (Frowe 2011; McMahan 2009).

This project explores aspects of this new line of research, with special emphasis on the moral categories that have been studied and elaborated in that context. In particular, the project will also engage with historical examples of warfare and with the experiences of soldiers who have committed acts of killing and fighting. It then explores the moral categories of “accountability”, “blame”, and “culpability”. At the heart of this investigation is the special status of “killing in war” as an act of self-defense (McMahan 2009). In contrast to the general idea that self-defense is not only exempt from blame but also justified by a “domestic analogy” which exempts nations from blame when fighting in self-defense (Walzer 2006), the debate in war ethics now focuses on fine-grained distinctions that transcend this general presumption (Kamm 2011). Provided that war ethics contributes to interpreting normativity in amoral scenarios anew, this investigation also sheds new light on moral rules and their justification more generally.


Normativity in Public Policy: Principles of Nudging

Nudging is “... any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives” (Thaler and Sunstein 2009). Nudging operates on the premises that people are not purely rational economic optimizers when making decisions, that the environment can instead be reshaped so that people’s actions better reflect their real underlying desires, and that the government, in some respects, knows better what is good for the individual. Yet, different from moral prescriptions, nudging takes an indirect route, supporting already existing desires and proposing institutional settings that encourage the corresponding behavior.

In this project, choice architecture is supposed to complement the already existing structure of legislation and incentives. The project explores the methods of nudging in the context of health institutions in a critical way. Transparency and public scrutiny are needed in order to prevent manipulation and coercion, and to guarantee respect for a person’s autonomy, so as to avoid “unethical nudges”. The analysis of choice architecture aims at rendering explicit its definitions, underlying premises, and practices, and will be applied to a concrete setting in health policy.


Normativity and Psychiatry

When someone is diagnosed with a severe psychiatric condition, such as “major depression” or “schizophrenia”, it is generally accepted that something has “gone wrong”, that things are “not as they should be”. However, what is not so clear is which norms psychiatric illnesses deviate from. One might appeal to biological norms, but there is debate over what these consist of and whether a human being must depart from them in some way in order to be properly regarded as ill. In addition, people with psychiatric illness diagnoses sometimes fail to conform to various social and cultural norms, such as norms of etiquette. Certain behaviours may also be judged as morally unacceptable, thus raising the question of whether and how “mad” is to be distinguished from “bad”. Furthermore, there is a phenomenological dimension to normativity that has been neglected. Most people with severe psychiatric illness diagnoses feel that something is wrong. For example, the world as a whole may appear somehow strange, not right. It is not at all clear what this sense of wrongness consists of or what form of normativity it involves. To further complicate matters, psychiatrists themselves are subject to various different norms in their interactions with patients, and critiques of orthodox psychiatry often single out the attitudes of clinicians as inappropriate in one or another way.

Hence psychiatry is fertile ground for a philosophical study of what the various kinds of norm consist of, how they operate, and how they interact. Given the diversity of norms that are implicated in psychiatric illness, such a study will need to address work in some or all of the following areas: moral, social and political philosophy; philosophy of mind and psychology; philosophy of medicine and psychiatry; philosophy of biology; phenomenology. In the process, it will also need to distinguish the descriptive question of how norms do operate from the normative question of how they should operate.

The proposed project thus presents us with an opportunity to build bridges between different areas of philosophy, which have tended to address the topic of normativity in relative isolation from each other. In so doing, it will also enhance our philosophical understanding of psychiatric illness and its wider moral, social, political, and cultural context.


Normative Requirements of Group Agency

The fascinating question of the way in which aggregative or corporate groups can be seen as agents has recently gained much attention in the philosophical literature. However, research on group agency has mainly focused on the metaphysical status of such entities, above all on whether group agents have standing over and above the attitudes of the individuals that make up the group and, consequently, whether we can and should ascribe intentional states and attitudes to group agents. Though the current literature on the topic is impressive, it is remarkable that little scholarly work has yet been done on the normative constitution of group agents.

The dissertation project aims to make a significant contribution to filling this lacuna. The main focus will be on exploring the rational and moral norms that constitute and guide group agents. Current approaches to group agency have often started from philosophical accounts of joint intentions and shared activities of individuals. Scaling up from the joint actions of a small number of individuals to group agents ignores, however, that there is a constitutive difference between a small number of persons acting together (dancing a tango, walking together, painting a house) and organised group agents. The norms that constitute small-scale groups are different from the constitutive norms of corporate or institutional group agents. Moreover, ascribing mental states like intentions to group agents is highly controversial. Some philosophers have tried to resolve this problem by claiming that the working of group agents is best understood if we treat them as if they were intentional agents. But why, one might ask, should we conceive of group agents as entities with a fictitious mental life if we are interested in explaining their actual workings and outcomes?

Skepticism towards ascribing an independent mental life to group agents has motivated several philosophers to start from a functional understanding of agency. According to functionalists, agency is merely a process of generating outcomes. A functionalist account of agency avoids metaphysically dubious assumptions, but fails to take the normative dimension seriously. Some functionalists have tried to ascribe attitudes to group agents by claiming that the attitudes of group agents supervene on the attitudes of the individuals that make up the group. However, more needs to be said about this relationship. A core hypothesis of the dissertation project is that the individual attitudes on which the group agent’s attitudes ought to supervene can only be specified in light of the constitutive principles and the particular normative framework of the group agent.
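This hypothesis can be illustrated with the well-known discursive dilemma from the judgment-aggregation literature (associated with List and Pettit): which attitudes the group counts as having depends on the aggregation rule, i.e. on the group’s constitutive norms, and not on the individual attitudes alone. A minimal sketch:

```python
# Sketch of the discursive dilemma: the group's attitude on a
# conclusion depends on which aggregation rule the group's
# constitutive norms prescribe.

def majority(votes):
    """True iff a strict majority of the votes are True."""
    return sum(votes) > len(votes) / 2

# Three members judge two premises (p, q) and the conclusion (p and q);
# each member is individually logically consistent.
members = [
    {"p": True,  "q": True,  "p_and_q": True},
    {"p": True,  "q": False, "p_and_q": False},
    {"p": False, "q": True,  "p_and_q": False},
]

# Premise-based procedure: aggregate the premises, then infer.
group_p = majority([m["p"] for m in members])            # True
group_q = majority([m["q"] for m in members])            # True
premise_based = group_p and group_q                      # True

# Conclusion-based procedure: aggregate the conclusion directly.
conclusion_based = majority([m["p_and_q"] for m in members])  # False

print(premise_based, conclusion_based)  # the two rules disagree
```

Since both procedures rest on majority voting over the same individual attitudes yet yield opposite group attitudes, which attitude supervenes on the members’ attitudes is fixed only once the group’s constitutive rule is specified.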

The overall purpose of the dissertation project will be to provide a detailed philosophical investigation of the normative foundations of group agency and to work out the rational and moral norms that constitute and guide group agents.