Friday, September 16, 2011

Moral Cognition??




Published on Saturno, IL FATTO, July 2011.



Ironically, Professor Marc Hauser, a leading expert on moral cognition, has been forced to resign from Harvard University for falsifying the data of at least eight experiments published in prestigious academic journals. Known in Italy as the author of best-sellers in that most fashionable of literary genres, pop science, Hauser is the author, among other things, of the much-acclaimed Menti Morali (Il Saggiatore, 2010), a slim volume explaining that our intuitions about good and evil are guided by an innate system of rules similar to the one that guides language learning: a universal moral grammar.

The original sin of today's research in moral psychology belongs, truth be told, to the philosophers: over forty years ago, the philosopher Philippa Foot posed a moral dilemma that became the puzzle with which conference evenings were whiled away, the Trolley dilemma. A driverless trolley is hurtling down the tracks. In its path are five men who have been tied to the rails by a gang of sadistic philosophers. You are watching the scene from the station's control cabin and can pull the switch that will divert the trolley onto another track, to which a single poor fellow, victim of the same gang of lunatics, is tied. Do you pull it or not? If yes, your moral intuition is utilitarian: better one death than five. If no, your moral intuition is more Kantian: moral imperatives, such as do not kill, admit no exceptions.

Psychologists took the thought experiment seriously and began running studies to see how people reasoned about the dilemma. Most people turn out to be utilitarians. But in a variant of the experiment, in which you watch the trolley from a bridge and the only way to stop it before it crushes the five people is to grab the fat man standing next to you and throw him off the bridge so that his weight blocks the trolley's course, people answer (surprise!) that they could not bring themselves to kill one stranger for the good of five other strangers.

Among moral psychologists, some say that moral intuitions are guided by reasoning, some by the emotions, and some, like Hauser, by a system of innate rules with the same abstractness as the rules that, according to Chomsky, govern linguistic intuitions of grammaticality. A villain, in short, would be an "ungrammatical" individual, one who behaves without respecting the rules to which we (innately) expect the whole of humanity to conform. Hauser began as a specialist in animal communication: his doctoral work, The Evolution of Communication (MIT Press, 1998), argued that the communication of many species follows abstract grammatical rules. No emotion and no reason, then: in morality what counts is applying the right rules. A young rising star at Harvard fifteen years ago, Hauser made no secret of his ambition and his unbridled desire for success, collaborating with heavyweights of the field such as Chomsky himself, Antonio Damasio, and others. Alas, many of his experiments have proved impossible to replicate. Monkeys that recognize abstract rules, or that recognize themselves in the mirror, exist only in the laboratory of the ungrammatical Hauser. His fall was swift: Hauser found himself with the FBI in his office seizing videos and documents, and after an inquiry lasting more than three years, he decided to resign.

Perhaps we should take the Hauser case as an occasion to ask ourselves, rather than about the morality of marmosets, about the morality of the cut-throat game of contemporary science, where the logic of success, money, celebrity and envy evokes Wall Street more than the image of the disinterested scientist moved only by a passion for truth. A little moral grammar, or simply some good manners, would have done Hauser good, along with the whole band of paladins of truth who dream of the front page of the New York Times and of best-seller lists.

Thursday, September 15, 2011

The Library of the Future



Published on SATURNO, IL FATTO, Sept. 9th, 2011. Ask permission to share alike.

What will we be reading a hundred years from now? What titles will fill the by-then virtual bookshelves of our great-grandchildren, piling up as untidy icons on their iPads, mobile phones and interactive screens? The world is changing fast; America, between climatic and financial hurricanes, counts for less and less: China, India, Brazil and Russia are blowing a new wind through politics and the economy. The climate is unstable, finance governs international politics, cultural métissage is by now a banality (except perhaps in Padania), women and men have found new ways of living together, war is waged with robots… Will it still be Nietzsche, Marx and Freud who supply the keys for reading the vertiginous revolutions under way? What seems most lacking today, in the age of all-conquering technology, are visions, schemas of thought, as if we kept attacking a reality we no longer understand with old interpretive grids that are less and less appropriate.
«Saturno» has decided to look beyond the titles of the autumn publishing season and to ask intellectuals from around the world which books will give meaning to reality a hundred years from now: an imaginary library of the human sciences for understanding a world that is rapidly slipping out of our hands. The idea, launched by Ariel Colonomos, political scientist and professor of international relations at "Sciences-Po" in Paris, during a conference on Global Humanities organized by the Italian Cultural Institute in New York, was an immediate success. Many indeed believe that the sacred books of the future will bear titles very different from today's and will address themes we still can hardly imagine possible.

Thus Colonomos himself proposes a book on Confucius in Africa, given the immense investment of Chinese companies in Africa and its consequences for the politics of the continent. Or: Why Islam and the West Were Both Wrong: An Islamist Analysis of the Fatwa against Salman Rushdie 30 Years Later, an imaginary book, to be written after the equally imaginary fall of the current regime in Iran and the victory of moderate Islam. The Indian philosopher Akeel Bilgrami (Columbia University) proposes instead a rethinking of the philosophy of nature in a Gandhian key: Nature Is Not Just a Body to Exploit: The Idea of Nature versus the Idea of Natural Resources. The historian of science Steven Shapin (Harvard University) worries about the food of the future: Is Man Still What He Eats in the Age of Synthetic Cuisine? The writer Siri Hustvedt questions the identity of the future with the title The Self and the Other between Philosophy, Neuroscience and Literature. Nor should we forget that China is ever closer: the political scientist Pasquale Pasquino, Global Professor at New York University, proposes as a title The Triumph of the Middle Kingdom, the name of China in Mandarin. Some still believe that progress in the human sciences will come from their fusion with the natural sciences: the iconoclastic anthropologist Dan Sperber, winner last year of the first Lévi-Strauss prize in France, confidently proposes The Naturalistic Turn in the Social Sciences: 1980-2030. And someone immediately imagines a companion pamphlet, Hormones and Protons: An Essay on the Physics of Emotions, to be handed out in a course on "Endocrinology and Metaphysics"… Lisa Ginzburg thinks instead of personal relations in the intimacy of a future made of distances, and proposes the titles Neither with You nor without You. From Cohabitation to Coexistence: An Essay on the Evolution of Bonds and, to teach us to be cosmopolitan without losing our identity, Counter-Exodus and Anti-Diaspora: New Roots, or the Routes of Detachment, because no one in the future will know any longer exactly where their roots lie. David Berreby, American essayist and author of the acclaimed Us and Them: The Science of Identity (2008), asks how we will think about life and death with the title Essential Fragilities: A Short Essay on Letting Human Beings Take Part in Decisions of Life and Death. Jean Birnbaum, editor of the cultural pages of the daily Le Monde, imagines a philosophical pastiche with a deconstructionist flavour entitled Specters of Marx: The Posthumous Writings of Jacques Derrida.

«Saturno» could not resist contributing a few half-serious suggestions of its own. From the editorial staff come the following titles: The Ethics of the Break-Down: Personal Identity and Responsibility in the Age of Psychotropic Drugs; or Mediterraneanness: How the Jasmine Revolution Taught Us to Share the Shores of a Single Culture. And again: Staying Together. Coming out of Schizophrenia with All Personalities Intact; Robots at War: Cybernetic Wounded and Electronic Phantom Limbs. New Frontiers of Affective Computing. And finally: Slow-Cost. The Art of Travelling Slowly in the Age of Instantaneous Displacement.

The catalogue of the future is open. It certainly takes optimism to imagine the future, and also the awareness that all-conquering techno-science will not solve every problem: the old human sciences, weary and laden with interpretations of the world, will still be there, helping us make sense of what happens, because if the protagonists of our history, that is, we ourselves, no longer recognize ourselves in the theories that explain to us who we are and what we do, the history of humanity simply stops. All the papers from the New York conference on Global Humanities are available online at:

Wednesday, September 07, 2011

Epistemic Injustice and Epistemic Trust


Draft. Do not quote without permission. Submitted to the journal Social Epistemology, August 2011.


Miranda Fricker has introduced into the philosophical debate the insightful idea of epistemic injustice to address a series of asymmetries of power within the "credibility economy" of a testimonial exchange. In this paper, I will focus mainly on the first kind of epistemic injustice she discusses: testimonial injustice. According to Fricker: "The basic idea is that a speaker suffers a testimonial injustice just if prejudice on the hearer's part causes him to give the speaker less credibility than he would otherwise have given" (p. 4). Although she acknowledges various sources of testimonial injustice, she concentrates her analysis on what she considers the core case of this kind of epistemic injustice, that is, when identity prejudices bias the hearer's acceptance of a speaker's word. Here, I will challenge two main points of her account: (1) I will argue that the ways in which our credibility judgements are biased go far beyond the central case of identity prejudices, and (2) that the weight we give to our prejudicial biases crucially depends on the stakes of the testimonial context.

1. The Moon Landing Case: Why do you believe it?

One of the best-known variants of conspiracy theory holds that no man stepped on the Moon in 1969 and that the entire Apollo program, which achieved six landings on the Moon between 1969 and 1972 and was shut down in 1975, was a fake. The initiator of the conspiracy theory was Bill Kaysing, who worked as a librarian at the Rocketdyne company, where the Saturn V rocket engines were built, and in 1974 published at his own expense the book We Never Went to the Moon: America's Thirty Billion Dollar Swindle[1]. After the publication, a movement of sceptics of sorts grew and started to collect evidence about the alleged hoax. According to the Flat Earth Society, one of the groups that deny the facts, the landings were staged by Hollywood with the support of Walt Disney and under the artistic direction of Stanley Kubrick. Most of the "proofs" they provide are based on close analysis of the pictures of the various landings: the angles of the shadows are inconsistent with the lighting, the American flag waves although there is no wind on the Moon, the footprints are too precise and well-preserved for a soil that contains no moisture. And isn't it suspicious that a program involving more than 400,000 people for six years was shut down abruptly? And so on.

The great majority of people we consider reasonable and accountable (myself included) usually dismiss these claims by laughing at the very absurdity of the hypothesis (although there have been serious and documented responses by NASA to these allegations). Yet, if I ask myself on what evidential basis I believe that there was a Moon landing, I have to recognize that my evidence is quite poor, and that I have never spent a second trying to debunk the counter-evidence accumulated by these people. What I know about the fact mixes confused childhood memories, old black-and-white TV news and deference to what my parents told me about the landing in the following years. Still, the accumulated counter-evidence doesn't make me hesitate about the truth of my beliefs on the matter. That is because my reasons for believing that the Moon landing took place go far beyond the evidence I can gather about the fact itself. I trust a democracy such as the United States to meet certain standards of sincerity and accuracy; I have beliefs about the possible interests the US could have had in mounting a hoax (competition with the USSR, display of technological superiority…) and do not find them compelling enough to justify such an immensely costly operation; and I can reason about the very possibility of keeping the secret of a hoax in a program that involved more than 400,000 workers… I also have prejudices about a certain stereotype of information, conspiracy theories, that make me doubt the credibility of their promulgators. In a sense, I am committing an epistemic injustice against these people, judging them less credible not on the basis of my "epistemic superiority", but on the basis of an alleged superiority in values and beliefs about how a reasonable society conducts itself. My prejudices cause me to give the conspiracy theorists less credibility than I would otherwise have given them.

Am I right or wrong in committing this injustice? And is my prejudice working merely as a sort of social perception of this category of people, one that weakens my trust in what they say, or is it a more complex attitude in which a mixture of stereotypes, societal beliefs, values, deference to various epistemic and moral authorities (my parents, my community of peers, the United States) and a bit of rational inference are all at work?

This example illustrates the two main points I want to address here. The amount of trust we allocate to our interlocutors depends on many factors: a complex of judgements, heuristics, biased social perceptions and previous commitments that we rarely take the time to unpack when we face the decision to accept or reject a piece of information. The accuracy with which we weigh these factors, and the epistemic vigilance[2] we invest in monitoring the information that comes from others, depend on stakes of various orders that can rise or fall according to the context of the exchange.

Take another example that, from Fricker's perspective, could be cast as a case of a prejudicial dysfunction creating a credibility excess. I read on a pack of cigarettes: Smoking kills. I am inclined to believe that this is true. On what basis do I believe it? I often start my classes in social epistemology by gathering the reasons students have for believing it. I have come up with three main sets of reasons students systematically give: (1) it is true that smoking kills because I know people who smoked and were seriously ill; (2) it is true because I myself was a smoker and my health was much worse; (3) it is true because there exists scientific evidence that has established the fact. Of course, none of these three classes of "reasons" is an appropriate candidate for justifying my belief that smoking kills: (1) is based on a spurious sample of people (people whom I know) that is statistically irrelevant; (2) is based on introspection; and (3) is based on blind deference to scientific authority: usually, if I push my students further, they confess that they have never read scientific articles on the subject matter, but simply hypothesize that such articles must exist in order to justify anti-smoking policy. If I push even further, I sometimes obtain a more satisfactory answer, like this: printing a message on every pack of cigarettes means having access to a special channel of communication, one that is granted only in very special circumstances; it is obviously an obligation imposed on the tobacco companies, since no company would voluntarily advertise its product by saying that it kills. In Europe, this legal obligation was introduced at the beginning of the Nineties (1991 in France). The fact that the obligation is legally enforced by the Ministry of Health justifies the inference that, given that the Ministry of Health has the objective of preserving the health of the country's citizens, the message should be based on sound scientific evidence. So it is our trust in the benevolence of our institutions, in this case the various European Ministries of Health, that makes us believe that smoking kills.

As these examples show, credibility deficits and credibility excesses are not based on prejudices alone (even if there may be a prejudicial component in the credibility assessment) but on the complex values, cognitive inferences and emotional commitments that are at the basis of our epistemic trust[3].

I define epistemic trust as an attitude with two basic components: a default trust, the minimal trust we need to allocate to our interlocutors in order for any act of communication to succeed, and a vigilant trust, that is, the complex of cognitive mechanisms, emotional dispositions, inherited norms and reputational cues we put to work in filtering the information we receive[4]. Let me try to make this distinction clearer. I do not see the relation between default trust and vigilant trust as an opposition between a Reidean (non-reductionist) attitude towards testimonial information and a Humean (reductionist) attitude. Default trust and vigilant trust are deeply related: in most epistemic situations we do not choose to trust; we simply don't have the choice. A default trustful attitude towards communicated information is possible insofar as there exist cognitive mechanisms, emotional dispositions, inherited norms, etc., that make us epistemically vigilant. In a sense, we may say that we "trust our vigilance": we can take the risk of a default trustful attitude because we have developed (at various levels: evolutionary, cultural, institutional, etc.) various heuristics of epistemic vigilance. Default trust and vigilant trust are thus related in many interesting ways through more or less reliable strategies[5]. We do not constantly check these strategies, and we trust ourselves in relying on heuristics that are robust enough. Sometimes we are right and sometimes we are wrong. We take the responsibility of checking the reliability of our epistemic vigilance when the stakes are high. For example, in normal situations, the stakes the Moon landing raises for my life are quite limited. But if I were an American citizen called to vote on re-opening the Apollo program, which would imply a huge public investment, then I'd probably be more vigilant about the veracity of the facts and the feasibility of the program.
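To fix intuitions, here is a minimal sketch in Python of the stakes argument. It is my own toy operationalization, not a model from the literature, and all probabilities and costs are invented for illustration: default trust suffices as long as the expected cost of accepting a false claim stays below the cost of actively checking it.

```python
# Toy model of stakes-sensitive vigilance. All numbers are illustrative
# assumptions, not measurements.

def should_check(p_false: float, stakes: float, cost_of_checking: float) -> bool:
    """Raise active vigilance only when the expected cost of being wrong
    (probability the claim is false times what is at stake) exceeds the
    cost of verifying the claim."""
    return p_false * stakes > cost_of_checking

# Believing in the Moon landing for everyday purposes: low stakes.
print(should_check(p_false=0.01, stakes=10, cost_of_checking=5))      # False
# Voting on re-opening the Apollo program: same belief, much higher stakes.
print(should_check(p_false=0.01, stakes=10_000, cost_of_checking=5))  # True
```

The same credence in the claim licenses default trust in the first case and costly vigilance in the second; only the stakes have changed.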

Sometimes we can raise our vigilance by a closer inspection of the data, sometimes by interrogating ourselves about the sources of our trust or mistrust, and sometimes by refining our cognitive heuristics. Take another example. In the Oscar-winning movie Slumdog Millionaire, Jamal is a poor guy from the slums of Bombay who, through a series of fortuitous events, ends up taking part in a popular TV show, Who Wants to Be a Millionaire? This gives him the chance, if he answers the last question correctly, to win 20 million rupees, an enormous amount of money in India, equivalent to almost half a million dollars.

The last question is about cricket: who was the greatest champion of all time? Although Jamal had answered all the previous questions correctly, because they were somewhat oddly related to his past, he doesn't have a clue about this last one. There are four possible answers: A, B, C, D. He can use a lifeline and reduce the choice from four options to two. He does, and ends up with two possible answers: B and D. The suspense is very high: he can either win 20 million rupees or lose all the money he has won so far. The whole country is watching him. Before his final answer, a commercial break is scheduled. The presenter, a TV star, goes to the toilets, where Jamal already is. The presenter writes the letter B on the mirror and leaves. Jamal sees the letter. Is B the right answer? Should he trust the presenter? When they come back and the presenter asks him to choose between B and D, he says, with conviction: "D". "Are you sure?" asks the presenter. "Is it your last word?" Yes, he is sure. And he is right. The answer is D, and now he is a millionaire. Why didn't he trust the presenter? He didn't have any cues about the answer, and the presenter knew it. Wouldn't it have been more reasonable to trust him? Whatever reasons he may have had for mistrusting him, he surely reasoned hard before answering. This shows that he used heuristics in order to be vigilant in a situation in which he had no information. His heuristics go far beyond mere prejudice, even if they may contain some prejudicial element. For example, a possible interpretation of his choice is the following. During the commercial break, when the two meet in the toilets, the presenter tells him that he too was a guy from the slums who became rich and famous. Maybe Jamal's heuristic is something like "Do not trust people like me" or "Do not trust people who come from the slums". In a sense, there is a component of prejudice in his choice: people from the slums are unreliable, they would do whatever they can to mislead you, and so on; but that is not the whole story. The philosopher Alvin Goldman has suggested that his heuristic was perfectly rational: he simply didn't believe it could be in the interest of a TV presenter to make the network pay out 20 million rupees[6]. But no matter why he came to answer "D", what is important for my point is that he reasoned about his choice and used various heuristics, prejudicial information, inferences and emotional cues to come up with what he decided to say.
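Goldman's incentive argument can be given a back-of-the-envelope form. The sketch below is only an illustration with invented numbers: with two options left, following the tip beats reversing it only if Jamal's credence that the presenter is sincere exceeds one half; if his reading of the presenter's incentives pushes that credence below one half, answering "D" is the rational choice.

```python
# Toy expected-value reading of Jamal's decision (hypothetical credence).
p_honest = 0.3  # assumed: Jamal's credence that the tip "B" is sincere

# Two answers remain, so exactly one of B and D is correct:
p_win_follow  = p_honest      # B wins iff the tip was honest
p_win_reverse = 1 - p_honest  # D wins iff the tip was deceptive

choice = "B (follow tip)" if p_win_follow > p_win_reverse else "D (reverse tip)"
print(choice)  # D (reverse tip)
```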

To sum up: the common denominator of all these examples is that we have to reason, in different ways, in order to come to trust or distrust what is said. These are examples of testimonial beliefs: things we believe because we trust the source of information.

Still, we do not trust passively; rather, we use various inferential strategies to work out a vigilant attitude towards our informants and the content they inform us about. The central case of testimonial injustice, prejudicial injustice, which Fricker presents in her book, is among these inferential strategies. Although its importance in the credibility economy, especially in face-to-face communication, is undeniable, I consider it part of a set of multiple strategies, whose reliability and justification vary with context, that make us epistemically vigilant towards the social world around us.

In the second part of this paper, I will try to detail some of these strategies and explore the conditions under which we make an epistemically responsible use of them[7]. When we are actively vigilant, that is, when we are aware of the heuristics and biases we are using to filter information, we can take a responsibly vigilant attitude in at least two ways:

1. External Vigilance:

a. I take an epistemically vigilant attitude towards the information I accept, trying to unpack the practices, principles, etc., that I endorse in accepting it (how I trust, which heuristics I use, which norms);

b. I try to pry apart “valid” practices and heuristics from those that are based on psychological biases, internalized norms of authority and conformism, moral commitments and emotional reactions.

2. Internal Vigilance

a. I take a certain “distance” from the objects of my beliefs, and situate them within a genealogy and a social history of their emergence and impact in my cognitive life.

b. That is, being epistemically vigilant towards my own beliefs is a way of maintaining a "critical stance" on the reasons, institutional biases and social pressures that make concepts, biases and prejudices emerge and thrive in my way of thinking. If I take the time to ask myself "Why should people trust what I say?", I take a responsibly vigilant stance towards my own epistemic practices.

Note that the dose of vigilant/default trust we activate in filtering what other people say depends on our own aims, as I hope my previous examples show. Most of the knowledge we happen to acquire through communication "falls" on us even when we don't need it. As Samuel Johnson put it: "If it rained knowledge I'd hold out my hand; but I would not give myself the trouble to go in quest of it"[8]. That is to say, we do not activate the same vigilance in every situation, and we are not required to be responsible about accepting or diffusing knowledge in the same way on every occasion. Even if I do not have a serious explanation of why I believe that smoking kills, to the extent that my belief doesn't have harmful consequences for others (such as discriminatory attitudes towards smokers), I am entitled to believe it on trust without further inquiry. But if the stakes of endorsing a belief are higher, I should be vigilant about the mechanisms that make me endorse and re-transmit it.

Now, what are these mechanisms? What makes us trust? I will detail seven different sources of trust that we may monitor in ourselves and in others when we trust someone or present ourselves as a trustworthy source of information:

1. Inference on the subject’s reliability

2. Inference on the content’s reliability

3. Internalized social norms of complying with authority (“He is my master, thus I believe what he says…”)

4. Socially distributed reputational cues

5. Robust signals

6. Emotional reactions

7. Moral commitments

For each of these sources of trust, I’ll try to indicate some underlying social, cultural and cognitive mechanisms that sustain them. Let us start with the first.

1. Inferences on the subject’s reliability

Among the most common heuristics that often make us commit some kind of "epistemic injustice" are the bundles of inferences we make about a subject's reliability as a source of information. I think that Fricker's central case of testimonial injustice, that is, prejudice, falls into this category. But it doesn't exhaust it. Here is a possible list of inferences on the subject's reliability:

1.1. Contextual signs of reliability

We may infer the reliability of our informant simply by quickly judging her better epistemic position: if I call my sister to ask about the weather in Milan, the city where she lives, I trust her because I am able to infer her contextual reliability on the subject matter, given her better epistemic position. If I follow on Twitter someone based in Cairo who reports on the aftermath of the revolution, I do so because I think he is in a better epistemic position than I am[9]. Another example of a contextual sign of reliability is having the appearance of a "local" when I am asking for directions in a town I do not know. Of course, I may be wrong about these signs: my sister may give me only superficial information about the weather by looking out of the window, while a Google search would give me a much more detailed forecast; the guy in Cairo may be strongly ideologically biased and tweet only information that contributes to a certain distorted vision of the situation; and my local informant may be as foreign as I am, and merely "look" local. But in all these cases it seems at least prima facie reasonable to make these inferences.

1.2. Previous beliefs (including prejudices)

The social world doesn't come to us as a tabula rasa, but as a complex landscape, with a variety of statuses, positions, hierarchies and various kinds of cultural information. All our "folk sociology"[10] plays a role in assessing the default reliability of our informants. The heuristic Jamal uses to assess the reliability of his informant, the presenter, is a folk-sociological heuristic of the kind I am listing here: "Do not trust someone who comes from the slums". Prejudices are heuristics of this kind. They can have a strong perceptual component, as Fricker points out (such as skin colour, accent, or visible signs of belonging to a certain cultural identity), but not necessarily. I remember coming up with a lot of interesting generalizations about the character of Estonians when I met a colleague who came from the University of Tallinn, only to discover that he was from Ecuador and merely worked in Estonia. Many inferences can be more folk-psychological, like the fundamental attribution error, a well-known bias in social psychology according to which people tend to attribute the causes of behaviour to a person's character rather than to the situation they observe[11]. They all work as "quick and dirty" heuristics for assessing the credibility of an interlocutor.

1.3. Acknowledged expertise

Our assessment of the subject's reliability can be based on her past record and her earned expertise on a certain subject matter. Sometimes we check these records ourselves, as when we trust a friend who has been trustworthy in the past. Sometimes we base our assessment of other people's expertise on other people's judgments and on the various reputational cues I will analyze in a separate section.

2. Inferences on the content’s reliability

Some contents are straightforwardly credible, like a self-evident sentence such as "I am not illiterate". Others are impossible to believe, such as someone saying "I am mute". Thus content itself provides cues for credibility[12]. Of course, in most cases, cues for credibility are more complex to detect and require an inferential effort on the side of the hearer.

2.1. The pragmatics of trust

In previous work I have argued for a "pragmatic" approach to trust, that is: the dose of default trust we are willing to invest in an informational exchange follows some pragmatic rules. In line with Gricean and post-Gricean approaches to pragmatics[13], I claim that our default trust is related to the relevance of what the speaker says: if she is not relevant to us, that is, if the effort required to interpret what she says is not balanced by the cognitive benefits we may gain by listening to her, our default trust decreases. The relevance of what is said is an important cue of content reliability. In previous work, I gave some examples of this adjustment between relevance and trust. Take the case of a mother who is systematically late to pick up her children at school. The teacher is really annoyed about this. One day the mother feels so guilty about being late that she comes up with an elaborate justification for the teacher: "I was in the underground and it was stopped for half an hour because of a technical problem; then, while I was going out, I met an old friend who told me he had been seriously ill…" The teacher finds the explanation too long: too many details that are not relevant for her. Her default trust decreases the longer the mother goes on with her convoluted story[14].
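As a rough illustration of this adjustment (my own toy rendering; relevance theory itself is comparative rather than numerical), one can treat relevance as cognitive benefit per unit of interpretive effort and let default trust shrink when relevance falls below some tolerance:

```python
# Toy rendering of the relevance/trust adjustment. The functional form
# and the threshold are illustrative assumptions only.

def relevance(cognitive_benefit: float, processing_effort: float) -> float:
    """Relevance grows with cognitive benefit, shrinks with effort."""
    return cognitive_benefit / processing_effort

def default_trust(base: float, rel: float, tolerance: float = 1.0) -> float:
    """Keep base trust while relevance is above tolerance; discount it below."""
    return base if rel >= tolerance else base * (rel / tolerance)

# The mother's long-winded excuse: much interpretive effort, little benefit.
print(default_trust(base=0.9, rel=relevance(1.0, 4.0)))  # 0.225: trust drops
```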

2.2. Sound arguments as a cue for trustworthiness

Mercier and Sperber (2011) have argued that the very capacity of human reasoning serves a function of epistemic vigilance, namely that of producing convincing arguments whose structure is credible to the hearer[15]. We make inferences about the reliability of content, sometimes biased inferences, as when we judge a piece of content more reliable because it confirms what we already believe (a bias psychologists call "confirmation bias").

To sum up, content bears signs of credibility that depend not only on evidence, but also on the way it is structured and transmitted.

3. Internalized social norms of complying with authority

3.1. Deference to authority

Our cognitive life is pervaded by partially understood, poorly justified beliefs. The greater part of our knowledge is acquired from other people's spoken or written words. The floating of other people's words in our minds is the price we pay for thinking. Most of the time we accept these beliefs in a "deferential" way: that is, even if we do not understand them perfectly, we think we have reason to accept them because the speaker is an authority for us. Consider this case. In high school in Italy, many years ago, I heard my Latin teacher say: "Cicero's prose is full of synecdoches"[16]. I had a vague idea of what a synecdoche was, and did not know until then that one could characterize Cicero's writing in this way. Nevertheless, I relied on my teacher's intellectual authority to acquire the belief that Cicero's prose is full of synecdoches, and today I have a more precise idea of what my teacher was talking about. Or consider another example. I used to believe that olive oil is healthier than butter as a condiment. This was common sense for me, and I never thought it possible to challenge it until I moved from my country of origin, Italy, to France, and realized that my stubbornness in believing it rested on ancient and almost unconscious commitments to the authority of my culture, on deference to my mother and grandmothers, not on any particular evidence[17].

One of the main reasons we have for trusting someone, or for strongly holding a belief, is related to our deferential relations to various authorities, such as family, culture, mentors and institutions. For example, many "folk-epistemological" beliefs about food, healing practices and education are entrenched in ancient loyalties we have been committed to since childhood. These "unreal loyalties", as Virginia Woolf used to call them, are very difficult to challenge: we pay a price in identity when we change our minds about them.

3.2. Natural pedagogy

Some psychologists claim that we are "wired" to learn cultural information from authorities, and to trust them even when we do not fully understand what they are telling us[18]. Contexts of cultural learning would then be those in which we trust someone because of his or her position of epistemic authority over us, independently of the content of the information we are about to acquire. Csibra and Gergely (2009) call this disposition "natural pedagogy", that is, a capacity to allocate a higher default trust to some informants according to the position of authority they hold towards us. This is typical of learning situations, especially in infancy: in this perspective, children should not be seen as gullible: their disposition to trust is a heuristic that allows them to learn faster than they could by trial and error, or by trying to fully understand the meaning or the function of what they are learning.

3.3. Conformism

Conformism is not only the vice of those who uncritically conform to custom, but also a powerful cognitive strategy for minimizing risks in accepting new information. In social psychology, the study of influence shows that people tend to comply with the dominant view[19].

Evolutionary social psychologists such as Boyd, Richerson and Henrich have more recently argued that conformism could be an evolutionary strategy: they claim that humans are endowed with a "conformist bias" that makes them readily adopt dominant views, thus quickly acquiring the available social information[20]. The "Twitter mind" of followers and leaders is built on a robust heuristic that saves us time in gathering reliable information: the more a belief spreads and is held by authorities, the safer it is to trust it.
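The formal core of these models is simple enough to sketch. In the standard Boyd-Richerson formulation (reproduced here from memory, so treat the details as a sketch), a conformist bias D > 0 makes the frequency p of the majority variant grow disproportionately: p' = p + D·p(1 − p)(2p − 1).

```python
# Sketch of conformist transmission after Boyd & Richerson (1985):
# with bias D > 0, whichever cultural variant is in the majority is
# adopted at a rate disproportionate to its current frequency p.

def step(p: float, D: float) -> float:
    """One generation of biased transmission for a variant at frequency p."""
    return p + D * p * (1 - p) * (2 * p - 1)

p, D = 0.6, 0.2  # a slim initial majority and a modest bias (toy values)
for _ in range(30):
    p = step(p, D)
print(round(p, 3))  # well above the initial 0.6: the majority heads to fixation
```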

3.4. Social Norms

Trust and trustworthiness are also claimed to be social norms[21]. There are people, institutions and practices that we are expected to trust if we comply with the norms of our group. Bicchieri et al. (2011) have provided experimental evidence showing that our normative expectations about other people's behaviour bear on trustworthiness, not on trust: we do not expect others to trust, but to be trustworthy in certain circumstances[22]. There are nonetheless cases of epistemic trust based on normative expectations on both sides: speakers are expected to be trustworthy and hearers are expected to trust. Take the publishing format of peer-reviewed journals: given that the papers appearing in those journals are inspected by reviewers before publication, we, as members of the academic community, are expected to trust peer-reviewed journals, and colleagues who publish in them are expected to be trustworthy (i.e. not to cheat on data, not to plagiarize other authors, etc.)[23]. Credibility economies can thus be regulated by social norms of trust and fairness that we are willing to comply with in an epistemic community.

4. Socially distributed reputational cues

Reputation is usually defined as the social information that our patterns of action leave around us[24]. It can crystallize through time into "seals of approval" or "seals of quality" that become easily readable labels of reliability. In a narrower, epistemic sense, a reputation is the set of social evaluative beliefs that have accumulated around a person, an item or an event. For example, a doctor's reputation is the set of positive or negative beliefs that have spread about her conduct. The way these reputations are formed and maintained is highly context-dependent and is also influenced by the formal dynamics of social networks, which are often used to model reputation. For example, in the case of doctors, being the doctor of a well-regarded friend raises a doctor's reputation as a "good doctor"[25]. Reputations have an epistemic value as cues of reliability. They are social information that can be used as a proxy for trustworthiness. In the absence of direct information, we rely on these proxies to assess the reliability of a person, an item or a piece of information[26]. The use of reputational cues as social cues of trustworthiness, and the dynamics that affect reputations positively or negatively, as well as the biases underlying these dynamics (notorious effects such as the well-known Matthew effect in social networks), form an exciting new field of research of potentially huge relevance for understanding credibility economies.
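The Matthew effect just mentioned is standardly modeled as preferential attachment: each new endorsement goes to an agent with probability proportional to the endorsements she already has, so small early advantages compound. A minimal sketch, with all parameters invented for illustration:

```python
# Minimal preferential-attachment sketch of the Matthew effect in reputation.
import random

random.seed(1)
endorsements = [1, 1, 1, 1, 1]  # five doctors, initially indistinguishable
for _ in range(1000):
    r = random.uniform(0, sum(endorsements))
    acc = 0.0
    for i, count in enumerate(endorsements):
        acc += count
        if r <= acc:
            endorsements[i] += 1  # the well-reputed get better reputed
            break
print(sorted(endorsements, reverse=True))  # typically uneven: early luck compounds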

5. Robust signals

Among the strongest cues of reliability we use to assess the credibility of persons and items of information are robust signals, that is, signals that are difficult to fake. Signalling theory is a body of work in evolutionary biology, economics and sociology[27] that interprets some animal and human behaviours and strategies as signals aimed at communicating information about oneself to one's social surroundings. The most reliable signals are those that are difficult to fake, that is, too costly to be easily mimicked by those who do not possess the quality the signals are supposed to indicate. The best way to seem like someone who speaks English with an Italian accent is to be Italian. The best way to signal that you are a credible scientist is to be credible: publishing in a top journal such as Nature is a robust signal because it is very difficult to fake. In communication, we use robust signals as proxies to assess the reliability of our informants. Of course, the "robustness" of robust signals can sometimes be faked as well, and the criteria by which we judge a signal as costly may be manipulated. But it is a strong heuristic to rely upon.
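The "difficult to fake" condition can be stated as a simple cost-benefit inequality, a textbook separating condition from signalling theory rather than anything specific to the authors cited here: a signal stays reliable when sending it is profitable for genuine possessors of the quality but not for fakers.

```python
# Toy separating condition for a robust (honest) signal.

def signal_is_robust(benefit: float, cost_honest: float, cost_faker: float) -> bool:
    """The signal stays reliable when fakers find it too costly to send
    while genuine possessors of the quality still profit from sending it."""
    return cost_faker > benefit >= cost_honest

# Publishing in a top journal, with invented numbers: affordable for a
# strong scientist, prohibitively costly to fake for a weak one.
print(signal_is_robust(benefit=10, cost_honest=4, cost_faker=50))  # True
```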

6. Emotional reactions

Sometimes we trust others for no reason, just because something deeply emotional tells us to let go and give our confidence to a stranger. The old woman who reminded me of my grandmother and to whom I confided my pains of love on a train; the young, smiling boy whom I asked to take care of my kids while I was trying to fix the car in the middle of nowhere in Sicily… No reasons, no heuristics: just a strong emotional feeling that here we can let ourselves go. Well-known experiments by the psychologists Willis and Todorov (2006)[28] found that a fleeting exposure to faces is sufficient for forming judgments of trustworthiness, and that further exposure simply reinforces the rapidly formed judgment. The "first impression" effect is something we often use in assessing the credibility of others, and it is based on deep emotional reactions.

7. Moral commitments

It is difficult not to trust those towards whom we feel a normative commitment, just as it is difficult not to honour their trust. As in case 3, trust as a social norm, sometimes we trust others because we are morally committed to them, even if we have no reasons to trust them. A mother who trusts her addicted child not to take drugs anymore just because she has promised to trust her has no evidence on which to base her trust, but she is morally committed through her promise. A child who believes her father to be sincere and honest may trust him just because she is morally committed to respecting a certain hierarchical relation with him, and therefore to not challenging his ideas and point of view. Sometimes challenging our moral commitments to trust certain people and ideas in our community carries too high a price and would disrupt beliefs fundamental to our identity. That is why it is so difficult to get rid of these commitments, these "unreal loyalties", as I called them in § 3, using Virginia Woolf's expression.

Conclusion

Credibility economies are regulated by many different biases, institutional processes and cognitive strategies that end up distributing trust among the participants in a communicative exchange. I have tried to detail some of these mechanisms, and I have claimed that the central epistemic injustice, the testimonial injustice based on prejudice that Fricker points to in her book, is just one among this variety of mechanisms. Trust is an epistemic commodity. The dose of trust and distrust that makes us cognitively fit for our societies is a nuanced mixture of sound and unsound heuristics of which we should be more aware.

References

Bicchieri, C. (2006) The Grammar of Society, Cambridge University Press.

Bicchieri, C.; Xiao, E.; Muldoon, R. (2011) “Trustworthiness is a social norm, but trust is not”, Politics, Philosophy & Economics, 00, pp. 1-18.

Boyd, R. and Richerson, P. J. (1985) Culture and the Evolutionary Process, Chicago: Chicago University Press.

Clark, K.H. (2010) Reputation, Oxford University Press.

Coleman, J. (1990) Foundations of Social Theory, Harvard University Press, part II, ch. 12.

Csibra, G. and Gergely, G. (2009) “Natural pedagogy”, Trends in Cognitive Sciences, 13, 148–53.

Elster, J. (1979) Ulysses and the Sirens, Cambridge University Press.

Gambetta, D.; Bacharach, M. (2001) “Trust in signs”. In K. Cook (ed.) Trust and Society, New York: Russell Sage Foundation, pp. 148–184.

Henrich, J. and Boyd, R. (1998) “The evolution of conformist transmission and the emergence of between-group differences”, Evolution and Human Behavior, 19, 215–241.

Mercier, H.; Sperber, D. (2011) “Why Do Humans Reason? Arguments for an Argumentative Theory”, Behavioral and Brain Sciences, 34, pp. 57-111.

Origgi, G. (2000) “Croire sans comprendre”, Cahiers de Philosophie de l’Université de Caen.

Origgi, G. (2004) “Is trust an Epistemological Notion?” Episteme, 1, 1, pp. 1 -15.

Origgi, G. (2008) “What is in my Common Sense?” Philosophical Forum, 3, pp. 327-335.

Origgi, G. (2008) “A Stance of Trust” in M. Millàn (ed.) (2008) Cognicion y Lenguaje, Universidad de Cadiz, Spain, ISBN: 9788498281873.

Origgi, G. (2010) “Epistemic Vigilance and Epistemic Responsibility in the Liquid World of Scientific Publications”, Social Epistemology.

Recanati, F. (1997) “Can We Believe What We Do Not Understand?”, Mind and Language.

Sperber, D. et al. (2010) “Epistemic Vigilance”, Mind and Language, 25, pp. 359-393.

Willis, J. and Todorov, A. (2006) “First impressions: Making up your mind after a 100-ms exposure to a face”, Psychological Science, 17, 592-598.

Zahavi, A. (1977) “The cost of honesty”, Journal of Theoretical Biology, 67, 603-605.



[1] Cf. B. Kaysing (1974/1981) We never went to the moon, Desert Publications, Cornville, Arizona. I thank Achille Varzi for an insightful conversation at Columbia University, NY, in May 2010 about trust and the case of Moon landing.

[2] On the notion of epistemic vigilance, see D. Sperber, G. Origgi et al. (2010) “Epistemic Vigilance”, Mind and Language, vol. 25.

[3] On the notion of epistemic trust see Origgi (2004, 2008, 2011).

[4] Cf. G. Origgi (2010) Default Trust and Vigilant Trust, paper presented at the EPISTEME conference, Edinburgh, June 19th, 2010.

[5] Cf. Sperber et al. (2010), cit.

[6] Personal communication with Alvin Goldman after my presentation of the same example at the EPISTEME workshop in Edinburgh, June 19th, 2010.

[7] On the idea of “Epistemic Responsibility” see G. Origgi, this journal: “Epistemic Vigilance and Epistemic Responsibility in the Liquid World of Scientific Publications”.

[8] In J. Boswell (1791) The Life of Samuel Johnson, book V.

[9] Cf. on this point Richard Foley (2001) Intellectual Trust in Oneself and Others, Cambridge University Press.

[10] On the notion of folk sociology, cf. L. Kaufmann & F. Clément (2007) “How Culture Comes to Mind”, Intellectica, 2, 46, pp. 1-30.

[11] Cf. L. Ross, R. Nisbett (1991) The Person and the Situation, McGraw-Hill, New York.

[12] Cf. Sperber et al. (2010), cit., p. 374.

[13] Cf. P. Grice (1989) Studies in the Way of Words, Harvard University Press; D. Sperber, D. Wilson (1986/95) Relevance: Communication and Cognition, Basil Blackwell, Oxford.

[14] Cf. G. Origgi (2008) “A Stance of Trust” in M. Millàn (ed.) (2008) Cognicion y Lenguaje, Universidad de Cadiz, Spain, ISBN: 9788498281873.

[15] H. Mercier, D. Sperber (2011) “Why Do Humans Reason? Arguments for an Argumentative Theory”, Behavioral and Brain Sciences, 34, pp. 57-111.

[16] This example is a reformulation of an example of François Recanati’s from his paper “Can We Believe What We Do Not Understand?”, Mind and Language, 1997, which I have discussed at length in another paper: “Croire sans comprendre”, Cahiers de Philosophie de l’Université de Caen, 2000. The problem of deferential beliefs was originally raised by Dan Sperber in a series of papers such as “Apparently Irrational Beliefs” and “Intuitive and Reflective Beliefs”, Mind and Language, 1997.

[17] On commonsensical beliefs and deference to authority, cf. G. Origgi (2008) “What is in my Common Sense?” Philosophical Forum, 3, pp. 327-335.

[18] Cf. Csibra, G. and Gergely, G. (2009) “Natural pedagogy”, Trends in Cognitive Sciences, 13, 148–53.

[19] For a classical study on this issue, cf. S. Milgram (1974) Obedience to Authority, an Experimental View, New York, Harper and Row.

[20] Cf. Boyd, R. and Richerson, P. J. (1985) Culture and the Evolutionary Process, Chicago: Chicago University Press; Henrich, J. and Boyd, R. (1998) “The evolution of conformist transmission and the emergence of between-group differences”, Evolution and Human Behavior, 19, 215–241.

[21] Cf. J. Elster (1979) Ulysses and the Sirens, Cambridge University Press; C. Bicchieri (2006) The Grammar of Society, Cambridge University Press.

[22] C. Bicchieri, E. Xiao, R. Muldoon (2011) “Trustworthiness is a social norm, but trust is not”, Politics, Philosophy & Economics, 00, pp. 1-18.

[23] For an analysis of epistemic vigilance in the case of scientific publications, see G. Origgi (2010), cit., this journal.

[24] Cf. J. Coleman (1990) Foundations of Social Theory, Harvard University Press, part II, ch. 12.

[25] For a social network analysis of reputation, see Kenneth H. Clark (2010) Reputation, Oxford University Press. For an epistemology of reputation, see my “Designing Wisdom Through the Web: Reputation and the Passion of Ranking” in H. Landemore, J. Elster (eds) (in press) Collective Wisdom, Cambridge University Press.

[26] Swedberg defines proxies in the following way: “Human beings are able to make important judgments about some topic X, by relying on some proxy sign or proxy information about X, that we may call Y. In some cases we go so far as to base our acts exclusively on Y, assuming then that it properly reflects X. This means that we have confidence in Y” (Swedberg, 2010).

[27] Cf. A. Zahavi (1977) “The cost of honesty”, Journal of Theoretical Biology, 67: 603-605; D. Gambetta, M. Bacharach (2001) “Trust in signs”, in K. Cook (ed.) Trust and Society, New York: Russell Sage Foundation, pp. 148–184. One could claim that Thorstein Veblen’s 1899 sociological classic, The Theory of the Leisure Class, in which he advances the hypothesis that conspicuous consumption is a way of signalling status, lays the foundation of contemporary signalling theory.

[28] Cf. Willis, J. and Todorov, A. (2006) “First impressions: Making up your mind after a 100-ms exposure to a face”, Psychological Science, 17, 592-598.