The Features and Adaptiveness of Human Social Cognition, and the Epistemic Norms implied for Individuals
Abstract
I begin this thesis by reviewing the facts about human social cognition and its most important features. Like many social and cultural animal species, humans outsource some of their cognition to other agents. But unlike that of other species, our social cognition is distinctive in being deeply social (no individual grasps the body of collective knowledge in its entirety, i.e. it is partially opaque to all) and cumulative (i.e. we build on top of each other's knowledge). I then analyze the possible ways in which these distinctive features may have been adaptive for humans. I argue that it is precisely by tolerating partial ignorance that we were able to achieve a body of collective knowledge that exceeded the cognitive limits of individuals, i.e. what could be discovered, learned or processed by any individual. This, in turn, gave us the superior knowledge and technology that was our decisive competitive advantage. Lastly, I discuss the epistemic norms that these features of human social cognition and their adaptiveness may imply for individuals. I argue that a distinction must be made between what is adaptive ex post for the human species and what is adaptive ex ante for an individual, and that the former does not always imply the latter.
1. Introduction
We live in a complex world. We would like to make decisions about this world that are well informed in ways relevant to whatever ends we may have. Given the complexity of our world, the knowledge base required to make such well-informed decisions, in many if not most cases, exceeds our individual ability to learn or process. It is an unavoidable feature of the human condition, and of the condition of any living organism for that matter, that we are cognitively finite beings navigating an infinitely complex and comprehensive reality. To successfully achieve any given end we set our minds to, we must acquire and grapple with a bare minimum of knowledge of this reality. This essential tension, between the finite amount of knowledge that each individual can personally learn, and the infinite amount of knowledge that exists and can be learned, implies that we cannot avoid being dependent on factors outside ourselves, at least to some extent, for our knowledge.1 It is by now well-established in the cognitive science and psychology literatures that humans are pervasively dependent on many external factors for our knowledge: we outsource much of our cognition to the external world. Many of our internal representations are very sparse in their content, and we 'revisit' the external world to 'reacquire' information for their further specification.
For example, experiments have shown that humans outsource their perceptual representations to the physical environment. Our internal visual representations are actually very sparse, and instead of storing detailed pictures in memory, we 'reacquire' visual information from the world on an as-needed basis. We look again at the world for more detail on a given feature as we need it (Chater 2018, Rayner 1998, Simons & Chabris 1999). In the same way, our beliefs tend to be very sparsely specified, and instead of storing detailed beliefs and their grounds in memory, we tend to reconstruct beliefs on an as-needed basis, based on cues from the external world (Evans 1982, Rieznik et al. 2017, Johansson et al. 2005, Hall et al. 2012). The same is true for our memories, which are sparse and extremely fallible. Memory retrieval is actually sensitive to the circumstances surrounding it, so that memories are not purely records of the past, but in part reflections of the present (Loftus 2003, Schacter 1996). Outsourced cognitive functions can be identified by the sparseness of their corresponding internal representations: the less of the information we use that we possess internally, the more of it must have been outsourced.2 “[O]ur belief states are often much less rich and much less stable states than we would have guessed. While of course we have internal belief states, they are remarkably shallow. When we ask ourselves what we believe, we look as much to the world (especially the social world) to answer the question as to our own internal states.” (Levy 2021, 71)
In this thesis, I examine a subset of human cognitive outsourcing, namely our outsourcing of cognition to other agents, that is, social cognition. I begin by reviewing the most important facts and features of human social cognition, as gleaned from various scientific literatures, e.g. cognitive science, psychology, cultural evolution and anthropology. After describing the most important features of human social cognition, I analyze the ways in which they may have been adaptive. That they were adaptive is beyond doubt: humans outcompeted all other species through our superior knowledge and technology, so our epistemic strategy must have gotten something right. But precisely which features of our epistemic strategy were adaptive, and how, is less clear. In attempting to find this out, I combine my own conceptual analyses with relevant facts and explanations from existing scientific works. Lastly, having described the features of human social cognition and analyzed the potential ways in which they were adaptive, I discuss what all this practically implies for individuals who want to optimize their epistemic performance. This ties back to our shared motivation to make well-informed decisions relative to our goals. Given that the described human epistemic strategy was adaptive for our species, what does this imply by way of an epistemic strategy adaptive for us as individuals? Let us begin, then, with the descriptive review of the facts about human social cognition.
2. The Features of Human Social Cognition
Testimony is one of the most pervasive mechanisms by which we outsource cognition to other agents and depend on them for our knowledge (C. A. J. Coady 1992, Lackey & Sosa 2006, Levy 2022b). We know things far off in time and space from us through other people's explicit testimony, e.g. written historical accounts and social media posts by people in other countries. We also rely on implicit testimony. We glean unarticulated assumptions from people's behavior, perspectives and framings. Stories implicitly encode norms and values. Default options are implicit recommendations (McKenzie et al. 2006, Carlin et al. 2013). What is not taken into consideration is implied to be not worth attending to. "All this knowledge is social: it’s conveyed to us by others and largely taken on trust. For much of what we know about the world, we are deeply dependent on others." (Levy 2021, 50) Humans are cultural animals in that we acquire knowledge through and transmit it to each other, horizontally or vertically (between peers, or between unequals, e.g. from elders to children). Culture is an adaptive mechanism, even if it is not a genetic one. Evolution is natural selection for adaptive traits, regardless of whether these traits are genetic or not. Culture affects our behaviors, capacities, knowledge and technologies, all of which obviously affect our biological fitness.
Culture is not unique to the human species. Chimpanzees and orangutans, for example, teach each other various food acquisition techniques, and different groups maintain different traditional methods (Schaik et al. 2003), indicating that these methods are inherited cultural knowledge rather than genetically encoded abilities. Obviously, humans differ from other cultural animals in the greater complexity of the knowledge that we transmit to each other. This may in part explain why human children remain dependent on caregivers so much longer than in other species. For cultural species, the period of dependency serves not only physical development but also apprenticeship in the local culture. The more complex and comprehensive the culture, the longer the apprenticeship must last. An often-used metaphor is that genes are the hardware, and culture is the software. Genetic adaptations do not take long to develop, since one is born with the relevant hardware 'pre-installed'. Cultural adaptations, however, must be ‘downloaded’, and humans take so long to reach adulthood because the ‘software packages’ that we need to download are very large. That culture constitutes a large proportion of our epistemic strategy also explains why human children are very malleable, i.e. programmable. They can acquire/download a great diversity of cultures/software, and thereby become equipped for a great variety of environments. Furthermore, humans are particularly tolerant of the young, allowing them to observe and imitate our behaviors (Heyes 2018). Long dependency, early malleability and tolerance of the young are all plausibly adaptations to the acquisition and transmission of culture.
Division of labor is another way in which we depend on others for our knowledge. We as individuals specialize in certain things, while outsourcing other things to other individuals. Sometimes we outsource some steps in the knowledge generation process to others. This is the case, for example, when a biologist outsources the data analysis step to the statistician on their team (though not all do this). Other times we outsource the entire knowledge generation process to others and take their word, i.e. testimony, for the outcome. That is, we rely on them to believe things on our behalf and believe whatever they believe. This is the case, for example, when a layperson defers to an expert's claim. The layperson has no personal knowledge of the research process that led the expert to their conclusion, and has thus outsourced the entire process to the expert. In cases where we outsource the entire knowledge generation process, basically outsourcing entire beliefs, what we believe tends to be indeterminate, unspecified in substantive content. Instead, we merely have placeholders, pointers to where the authoritative substantive content can be found ad hoc, what Rabb et al. (2019) call metaknowledge, "pointers to repositories of information that fill in the gap.” This can, for example, be an expert whose opinion to trust and defer to on that subject, whom we can call upon when we actually need to use the concept in a more specified manner. Our beliefs in such cases are only specified with higher-order content, such as "I believe whatever A believes". Indeterminate, unspecified beliefs are not restricted to scientific subjects. Many moral and political concepts that people hold, like ‘freedom’ and ‘brotherhood’, are also indistinct (Schwitzgebel 2009), and their further specification is outsourced.
In choosing whom to outsource beliefs to, i.e. whose knowledge claims to defer to, humans make extensive use of social referencing. That is, we are sensitive and responsive to evidence about the sources of the claims, to cues of their trustworthiness and reliability. We tend to align our beliefs with those who appear to be like us (in e.g. sharing our values), benevolent towards us and prestigious (P. Harris 2012, Mascaro & Sperber 2009, Sperber et al. 2010). We also tend to align our beliefs with what we perceive to be the consensus among such people, and are sensitive to cues of consensus (P. Harris 2012). We also tend to align our beliefs with those we come in contact with most often (the exposure effect), and what we perceive to be the consensus skews towards the views of those we are most exposed to (Weaver et al. 2007). The use of social referencing in belief-formation is not unique to any group. People from across the spectrum, political, ideological, religious, cultural and otherwise, all make use of social referencing. They merely differ in whom they find to be similar to them, benevolent towards them, prestigious etc. “[Human] knowledge production is in essential respects a product of the distribution of cognitive labor” (Levy 2021, 58), “epistemic labor distributed across space, time and across agents (...) Given that we’re epistemically social animals, it’s largely through deference [, and mechanisms of epistemic deference such as social referencing,] that we come to know about the world and generate further knowledge.” (Levy 2021, xii)
Deeply Social Cognition
Humans' extensive use of social referencing is a manifestation of a feature distinctive of human social cognition: that of being deeply social. Levy distinguishes between shallowly and deeply social (cultural) knowledge. The first he defines as that which is "within the cognitive grasp of a single individual" (Levy 2021, 45). Deeply social knowledge, by contrast, is "never fully grasped by any individual" (Levy 2021, xvi). A deeply social body of knowledge is partially opaque to all individuals who inherit and participate in it. That human cognition is deeply social is easily seen in the extent to which humans divide cognitive labor. Humans tend to divide knowledge up into very narrow fields, so that each person can specialize to such great depths that one person cannot easily acquire the expertise of another. Since we cannot acquire others’ expertise for ourselves, we are forced to defer to and trust each other in our respective specialties. To defer or to trust is precisely to adopt a belief (or behavior or other form of knowledge) without personally possessing the grounds for that belief. Social referencing is a manifestation of deeply social knowledge because the former is the behavioral implementation of the latter: social referencing is a social mechanism of belief-formation in the face of partially opaque knowledge. Its users defer based on higher-order evidence rather than first-order evidence, precisely because the first-order evidence is (at least partially) opaque to them. They cannot personally adjudicate between various claims based on the first-order evidence, so they must adjudicate between them based on the higher-order evidence about the sources of these claims, deferring to the claim with the most trustworthy source.
Overimitation is another behavioral disposition observed in humans (Heyes 2018) that is most plausibly an adaptation to deeply social cognition. Most cultural animals imitate. But they tend to imitate only behaviors that seem to them instrumentally rational relative to their goals. One experiment, for example, showed that chimpanzees would independently analyze the causal mechanisms at play and imitate only those behaviors that they could immediately see were instrumentally rational (Nagell, Olguin, & Tomasello 1993). But humans, both children and adults, are disposed to imitate even those behaviors that seem to them redundant or even irrational relative to given goals (Lyons et al. 2007, Flynn & Smith 2012). We are inclined to imitate even behaviors whose reasons we do not understand. But while we may be less discriminating than chimps with regard to the instrumental rationality of the behavior itself, we are more discriminating and selective with regard to the social cues surrounding the behavior. For one thing, we imitate only those behaviors that seem to us to be intentional on the part of the model (Meltzoff 1988, Gergely et al. 2002). In doing this, we are trying to distinguish the behaviors that are constitutive of our culture from merely incidental behaviors, so that we can imitate only the culture-constitutive elements. For another, we are more selective in our choice of whom to imitate, and use social referencing in this connection. For example, we tend to imitate successful individuals, the so-called prestige bias (Chudek et al. 2012, Henrich & Gil-White 2001). We also tend to imitate those who seem to embody the local norms, the so-called conformist bias, as we want to learn to fit into the society in which we live (Henrich & Boyd 1998).
Both children and adult humans default to overimitation when success seems to them to depend on following a precise protocol, the causal mechanisms of which are unknown to them, and their demonstrators seem to them to be experts (Acerbi & Mesoudi 2015).3 We also tend to overimitate in response to ostensive communication on the part of the demonstrator (e.g. “look carefully at what I’m doing right here”) and cues of conventionality.
All this evidence supports the claim that overimitation is an adaptation to deeply social cognition, i.e. knowledge partially opaque to oneself. Overimitation is the behavioral implementation of deeply social knowledge. When we overimitate, we are imitating despite not knowing the reasons behind the behaviors, that is, we are practicing knowledge that is partially opaque to us. The maintenance and perpetuation of deeply social knowledge requires overimitation: by definition, deeply social knowledge is partially opaque to all of its inheritors-participants. If these inheritors-participants are not inclined or willing to maintain knowledge partially opaque to them, then this knowledge will not be maintained, i.e. it will be lost. Without people being inclined and willing to overimitate, deeply social knowledge would be lost. Mechanisms of epistemic deference in general, including social referencing and overimitation, are thus necessary for deeply social knowledge. When a body of knowledge is partially opaque to its practitioners, they are forced to trust, to defer without acquiring or possessing the complete reasons behind it for themselves. Since human knowledge, unlike that of other cultural species, is indeed deeply social, this explains “why we’re less discriminating in what we copy and less disposed to analyze imitated behaviors than chimps.” (Levy 2021, 46)
Cumulative Cognition
There is another important feature distinctive of human social cognition and culture: we build on top of each other's knowledge (Tennie et al. 2009). We do not merely receive and then maintain each other's knowledge unchanged. We innovate upon each other's knowledge. This is called cumulative culture. No other species does this, or at least not nearly to the same extent as humans. The relation of deeply social knowledge to cumulative culture is subtle. Cumulative culture and deeply social knowledge are conceptually distinct. Cumulative culture is knowledge that builds on top of other received knowledge. Deeply social knowledge is knowledge that is partially opaque to all individuals inheriting and participating in it. These two properties are distinct, and it is possible for a body of knowledge to exhibit either one or the other property (or neither, or both). To illustrate this point, conceive of a project where the cognitive labor is divided. Each individual carries out their own subproject, and the subprojects are then combined into a single body of knowledge. It is possible for this body of knowledge to be deeply social and non-cumulative. The individuals can work on their own subprojects without knowledge of the other individuals' subprojects, which makes the combined body of knowledge partially opaque to every participant (i.e. deeply social). The subprojects can then be combined without having interacted with each other first, i.e. without building on top of each other. The combination would then simply cover a wider scope, but it would not have been cumulative. Vice versa, it is possible for the amalgamated body of knowledge to be cumulative and yet not deeply social. It can be cumulative in that the subprojects built off of each other's results. But it can still be shallow in that the combined body of knowledge remains cognitively simple and small enough for one individual to grasp completely. It is thus possible for knowledge to be cumulative and shallow.
But while it is possible to separate cumulative culture and deeply social knowledge in principle, the prospects of doing so in practice are not great. It seems impossible in practice to prevent cumulative knowledge from eventually becoming at least partially opaque to all individuals. If knowledge keeps accumulating across generations, and growing with each new agent building more on top, the body of amalgamated knowledge must at some point exceed what individuals can learn and handle. That is, it must at some point exceed the cognitive limits of individuals, at which point the amalgamated body necessarily becomes partially opaque to all individuals. Another reason why cumulative culture tends to require deference, overimitation, or 'blind' trust in practice is that the originators of various cultural practices may have been long dead, and are hence unavailable to communicate the reasons behind them. In such cases one can either trust (thereby maintaining these customs) or not trust (thereby ending their perpetuation).4 Only by trusting and taking these customs for granted can one then build on top of them, i.e. only by deference is cumulative culture possible. And as a matter of fact, human cumulative culture has indeed been deeply social throughout most of human history and prehistory, and most definitely still is today. Much of the cultural knowledge acquired was (at least partially) causally opaque to the originators themselves, let alone their inheritors.5 Furthermore, the body of knowledge that accumulated across generations in various human cultures very quickly exceeded what any single individual could cognitively learn or handle. The knowledge was passed down and accumulated across generations. The inheritors then maintained this knowledge and practice without knowing the scientific reasons why they worked. I will review the anthropological evidence for this later in this thesis.6
In summary, humans outsource cognition to the external world: we reacquire information from the world as needed to reconstruct a large proportion of our perceptions, memories and beliefs. Much of our cognitive outsourcing is to other agents, in the form of culture and division of labor. Many species engage in social cognition. But what is distinctive of human social cognition is that its process and product are cumulative and deeply social (partially opaque to all individuals). Implementing a deeply social and cumulative epistemic process requires mechanisms of epistemic deference. Thus, humans have evolved a host of behavioral dispositions for epistemic deference, such as social referencing and overimitation.
3. The Adaptiveness of these Features
Now that we have reviewed the facts about human cognition, and established the features distinctive of human social cognition, let us turn to the question of their adaptiveness. We mentioned that human culture is far more complex and comprehensive than that of any other cultural species. Perhaps this is a result of the cumulative and deeply social features of the human epistemic process. Had we not built on top of each other's knowledge and outsourced it deeply to other agents, the complexity and comprehensiveness of our culture may not have evolved beyond that achieved by chimpanzees. That would have hindered our ecological success as a species. One of humans' decisive advantages was that we acquired and used more knowledge (culture) than any other species. Given that humans thrived and utterly dominated in many diverse ecological systems, the features of the human epistemic process must have been adaptive. But exactly why and how they were adaptive is less obvious. In this section, I will analyze in what ways the features of human cognition might have been adaptive.
The first and immediately appreciable adaptive advantage of cognitive outsourcing in general, be it to other agents or to the environment, is that it saves energy. When outsourcing, the organism does less of the cognitive work itself, such as perceiving, memorizing and processing data, and thereby saves energy that would have been spent on these processes had they not been outsourced. Accessing the world itself (e.g. through sense perception) tends to be less costly than accessing internal representations of the world, among other reasons because the latter also require storage in memory. "[E]volved creatures will neither store nor process information in costly ways when they can use the structure of the environment and their operations upon it as a convenient stand-in for the information-processing operations concerned. That is, know only as much as you need to know to get the job done." (Clark 1997, 64) Besides saving energy, we cover more ground by using the world and others than we could on our own. Given that we are cognitively limited, whilst reality is infinitely complex, by outsourcing to reality we leverage more knowledge than we could ever personally comprehend. In other words, by outsourcing, the knowledge that we make use of exceeds the knowledge that we can explain. Outsourcing puts more knowledge at our disposal than we can master, amplifying our practical abilities.
Of course, all this energy-saving and leveraging of the world's own greater scope and depth of information is only beneficial so long as the outsourced functions are sufficiently reliable (accurate) for the purposes of the organism. There is at least one way in which outsourcing cognition to the world tends to be more reliable than not: the world is its own most accurate representation. The state of the world updates constantly, and in full detail. Our stored picture could never equal the detail and updating frequency of the world itself. We get a more accurate picture of the world by revisiting it than by storing it in memory― provided our abilities to acquire new information from the world are sufficiently rapid and accurate! For example, our perceptive faculties must be reliable enough, and the social cues to which we defer must yield sufficiently accurate representations of the world. This brings us to the question of whether our mechanisms for outsourcing cognitive work to other agents are sufficiently accurate to serve human purposes.
Anthropological evidence that humans thrived
The anthropological records from across the world show that humans thrived in a great variety of environments, from the Inuit in the Arctic, to the Maasai in Africa, to the Aboriginal peoples of outback Australia. So our epistemic strategy must have been (sufficiently) adaptive. But it is less clear precisely which features were adaptive and in what way. Levy argues that we owe most of our success to the social aspects of our cognition. For one thing, various studies show that collective deliberation tends to outperform individual deliberation (Mercier & Sperber 2017, Sunstein 2006, Surowiecki 2004). But more importantly, Levy argues, humans owe their overwhelming ecological success to the two aforementioned features distinctive of human social cognition, that of being deeply social and cumulative: "[W]e owe much of our success at colonizing a dizzying variety of environments to cumulative culture, which embodies valuable knowledge. This knowledge, I suggest, is deeply social: it’s the product of cognition distributed across many agents and across time, and it is never fully grasped by any individual" (Levy 2021, xvi)
Levy argues for this claim with anthropological evidence which seems to show that various thriving human populations had these two epistemic features in common: many successful cultures, although different in substantive content due to the different requirements of their natural environments, were observed to be both cumulative and deeply social in process. For example, the Arctic Inuit developed a host of special technologies, such as the manufacture of warm clothes from various animal materials and the building of kayaks and snow houses (Boyd et al. 2011, Richerson & Boyd 2008). We know that their cultural knowledge was both cumulative and deeply social from natural experiments like the Franklin expedition and the Polar Inuit epidemic (Boyd et al. 2011). An epidemic struck the Polar Inuit around 1820 and killed most of the group's older members. They lost much of their cultural knowledge during this episode, and could not reinvent the lost skills themselves. Their population declined until they reacquired the skills from another group about 40 years later. That they could not rediscover the lost knowledge within a generation shows that Inuit culture must have accumulated across many generations.7 From the fact that the surviving members of the older generation did not possess all the cultural knowledge of those who died, we can infer that there must have been a deep division of labor, where people specializing in one area did not possess complete knowledge of others' specialties.
Indigenous Americans developed methods of preparing corn with alkali to prevent pellagra (Henrich 2015). We know that their cultural knowledge was deeply social (partially opaque to them) from the fact that, upon being asked why they prepared plants in the ways they did, they could not give an answer more elaborate than "it is our custom" (Henrich 2015). The scientific connection was discovered later, when Europeans started importing and consuming corn in non-traditional ways and suffering pellagra as a result. Many other peoples have cultural practices whose adaptiveness was not understood scientifically until recently, for example the plant detoxification methods of the Yandruwandha people (Burcham 2008) and the avoidance of certain marine foods by pregnant women in Fiji (Henrich & Henrich 2010). None of these later-discovered scientific explanations were known to the indigenous peoples themselves.
Deeply Social Cumulative Culture is adaptive
When we discover common denominators among the successful, it is reasonable to suspect them of being potential causes of success. The anthropological evidence presented by Levy shows that the successful populations had cultures that were cumulative and deeply social. We can conceptually analyze these features for in-principle reasons why they plausibly outperform non-cumulative and shallowly social knowledge. Cumulative culture obviously outperforms non-cumulative culture in that it discovers and innovates further than non-cumulative culture ever could. Non-cumulative culture only ever transmits knowledge achieved by individuals, without ever building on top. This implies that the knowledge achieved will never exceed what can be discovered by any single individual in their singular lifetime. Cumulative culture passes one lifetime's worth of discoveries down, so that the inheritor can begin at the predecessor's end point and build another lifetime's worth of discoveries on top. Cumulative culture saves us from having to reinvent the wheel, so that we can move on to inventing the cart instead. Not to mention that coming up with a solution is far more difficult and time-consuming than imitating a solution one is taught. Besides saving us time and energy that we can spend on new questions, cumulative culture enables us to answer questions beyond the cognitive horizons of any individual, for example matters spanning longer than a human lifetime (observations must then be recorded across many generations), or spanning wider in space than the geographical location of given individuals (Shea 2009). Accumulation across time and space enables cognitive achievements beyond what any individual, generation or tribe can achieve.
Deeply social knowledge, defined as knowledge that is partially opaque to all individuals that inherit and participate in it, can instantiate in different ways. Let me briefly review the taxonomy of the different manifestations of deeply social knowledge, because there are distinct advantages to each manifestation. The first taxonomic division is between bodies of knowledge that are deeply social merely in contingent fact, and those which are necessarily deeply social. The former is the case when it just so happens that all individuals in a given knowledge community do not in fact possess complete knowledge, and are in fact willing to take many things on trust, but could learn the complete body of knowledge for themselves if they wanted to. The latter is the case when individuals could not learn the complete body of knowledge for themselves even if they wanted to. This, in turn, can be for a variety of reasons.
The first reason is that not all knowledge claims are accessible in their entirety to individuals. There are facts to which some individuals do not have full access, and regarding these facts, they must trust in and defer to the testimony of others who do have full access. We all depend on others' testimony to acquire knowledge inaccessible to us temporally (past historical events we did not witness), spatially (geographical locations we cannot travel to in order to see for ourselves), perceptually (e.g. a blind person must trust the visual description of a sighted person) or cognitively (e.g. when there are many prerequisites to understanding, such as for an advanced mathematical proof; in such a case, the layperson can only trust the mathematician's claimed result). Knowledge based on testimony is always partially opaque to the receivers of testimony. Precisely which parts are opaque, and how, depends on the kind of knowledge and testimony in question, but in all cases of testimony a gap in knowledge inevitably exists, no matter how small, between the giver and receiver of testimony.8 So all knowledge based on testimony is in this way necessarily deeply social. We widen our scope of knowledge along all these various dimensions (temporal, spatial, perceptual etc.) when we trust and defer to testimony. Conversely, we restrict our scope of knowledge along these dimensions when we insist on learning everything personally.
Secondly, even if each and every claim were accessible in its entirety to all individuals, the complete set of such claims that make up a body of collective human knowledge far exceeds the cognitive limits of any individual. Individuals could not acquire, process or store the entire set even if they wanted to. The sheer size of the body of knowledge accumulated across multiple agents and generations must eventually exceed what an individual can learn (let alone discover or reinvent from scratch) within their lifetime. The adaptiveness of deeply social knowledge in this second sense, then, is that individuals who choose to trust and defer can use more knowledge than they personally know. They are leveraging more knowledge for practical purposes than they personally possess and can process, and are thereby benefiting from the greater processing power of all the individuals combined. “[Deference] allows us to acquire knowledge and practices developed by multiple individuals, individuals dispersed across space and time. It allows us to acquire, and then to build on, deeply social knowledge: adaptive behavior that could not have been developed by any individual de novo, no matter how gifted and insightful that person might be.” (Levy 2021, 44)
These advantages for individuals scale to the community level, in that the community can leverage the body of knowledge possessed by the community as a whole, the total of which, albeit dispersed across agents, is greater than the total body of knowledge possessed by any single individual. This brings us to the adaptiveness of the division of labor, more specifically, the deep division of labor distinctive of humans. Humans divide labor so deeply that each of us specializes to such depths that others cannot acquire our expertise for themselves even if they wanted to.9 So as individuals we are specialists. But we specialize in different things, so that together we cover a broader scope of knowledge and thereby become a generalist species. In other words, it is thanks to deep division of labor that humans overcame the trade-off between depth and breadth― we became the jack of all trades and master of them all. Had we insisted on not trusting, not deferring, not going beyond knowledge we could acquire and handle for ourselves, we would have severely restricted the scope and depth of our knowledge, namely to within one individual's cognitive limits.10
To reiterate, individuals do not and cannot possess complete knowledge of the community's entire body of collective knowledge, because it most often exceeds their cognitive limits. Adding the knowledge of each and all community members together can amount to complete coverage of the body of collective knowledge― but it may also not. This brings me to the last reason I will consider for why a given body of knowledge may be necessarily partially opaque to everyone. There are cases where some knowledge claims are used by a given community, but not yet explicitly known to any individual in the community. In such cases, adding the explicit knowledge of all community members together does not amount to complete coverage of the body of knowledge used by the community, which includes both explicit and implicit knowledge. The aforementioned indigenous practices of plant detoxification are a good example of this. Those peoples probably first acquired these practices by trial and error, and not because they understood the first-order chemical knowledge behind these practices. The chemistry behind these practices was thus known to no one in their community. The adaptiveness of leveraging this sort of deeply social knowledge is that we can put things to use before we as a community understand them explicitly. We amplify our practical knowledge at any given level of declarative knowledge. If we had to wait until we understood things explicitly before we put them to use, human technology would have progressed at a far slower pace. Another adaptive advantage of knowledge partially opaque to the community as a whole is that it is good at solving intrinsically hard problems, that is, cases where the relationship between an action and its effects is hard to discern, where there is a lot of noise relative to signal. This can be because the feedback is slow, probabilistic or confounded by many variables.
Deeply social cumulative culture has often succeeded in solving such problems even without any statistical tools, again exemplified by the aforementioned practices of plant detoxification.
Before concluding this section, I want to make an observation about deeply social knowledge in general, as opposed to shallowly social knowledge. As a reminder, deeply social knowledge is behaviorally implemented, among other ways, in the form of automatic deference, overimitation and social referencing. Shallowly social knowledge is behaviorally implemented as independent attempts at critical thinking, causal analysis and reconstruction (of the causal chain and/or steps involved in the matter at hand). Which 'mode' of cognition is more appropriate depends on the problem type and domain. “Second guessing the technique is appropriate when it’s a component of shallow cultural knowledge, like the behavioral traditions seen in other primates: when what is transmitted is an innovation that is within the cognitive grasp of a single individual, tinkering with it may easily reap rewards.” (Levy 2021, 45) Independent thinking is also suitable to domains that do not depend on exact replication, such as the artistic domain. These domains can tolerate the inevitable changes and mutations resulting from independent thinking and reconstruction, even benefiting from the creativity that results from such contributions. On the other hand, when success depends on a precise sequence of steps, and the causal mechanisms behind them are opaque, independent attempts at reconstruction risk leaving out crucial components (Acerbi & Mesoudi 2015). Scientific and technological domains tend to require exact precision, and thus deference and overimitation. The various plant detoxification procedures require exactly so-and-so steps, none of which can be skipped.
Tailoring the epistemic approach to each problem's type may optimize the epistemic outcome of particular episodes of problem solving carried out by individuals. But it does not necessarily optimize the general epistemic performance of the entire species over time. In fact, by generally adopting the epistemic strategy of deeply social knowledge, humans inadvertently equipped themselves to better tackle problems of the complex type that exceed the cognitive limits of individuals, and thereby inadvertently did solve more of such complex problems. In other words, it is precisely because we humans evolved a general propensity to trust more, defer more and use knowledge partially opaque to us, that we acquired and accumulated knowledge more complex and comprehensive than any other species.11 It was because we decided to make use of knowledge claims beyond our individual cognitive horizons (temporal, geographic, perceptual), bodies of knowledge beyond our cognitive capacities (to acquire, handle, verify), and knowledge claims beyond those explicitly possessed by the community as a whole, that we became so technologically powerful and dominant as a species. We amplified our practical powers at any given level of declarative knowledge. “We might say that we owe our success to the fact that we are in some ways less—or at any rate less directly—rational animals than chimps. We defer to tradition (relatively) unthinkingly, in conditions in which they would analyze causal structure and innovate” (Levy 2021, 45). Obviously, it is undeniable that to make use of more knowledge than we explicitly possess comes with uncertainty― we are walking hand-in-hand with the unknown. But there can be no gain without risk. And looking back on our evolutionary history, the epistemic strategy of deep interdependence and division of labor that we bet on seems to have paid off.
In summary, outsourcing cognition is more adaptive than not because it tends to be less energy-costly and more accurate. Social cognition (i.e. outsourcing cognition to other agents) is more adaptive than individual cognition because collective deliberation tends to outperform individual deliberation in empirical studies. Cumulative culture is more adaptive than non-cumulative culture because a body of knowledge that is accumulated, passed on and built upon must eventually exceed what any single individual can achieve in their lifetime. Deeply social knowledge is more adaptive than shallowly social knowledge because it enables human individuals and communities to leverage, to make use of, more knowledge than they explicitly possess. That is, it increases their practical power at any given level of declarative knowledge.
4. The Implied Epistemic Norms for Individuals
Now that we have reviewed the features of human cognition, and analyzed the ways in which they may have been adaptive for our species throughout our evolutionary history, let us examine which epistemic norms they may imply for individuals. What should an individual do, if they wish to maximize the number of true beliefs that they hold?12 Which epistemic strategy is the most reliable for them to adopt? To what extent should they rely on others, trust in and defer to others, as opposed to relying on themselves and doing their own research? Our evolutionary history would seem to imply that we should rely on others more, as it was precisely this deep interdependence that provided us our superior knowledge and technology. We should focus on developing our own specialty in depth, to increase our accuracy in our own field, and depend more on others in their fields of specialty, to leverage their increased accuracy in that field. Across his many writings on this subject, Neil Levy's prescription is consistent with this ethos: If the goal is knowledge, individuals ought to defer more to experts.13 He advises against independent research aimed at truth. Insofar as a consensus among experts exists, individuals should defer to it. When no such consensus exists, Levy also prescribes deference to experts, just with the further requirement that one first do research into the available higher-order evidence to decide which experts are appropriate to defer to.14
But more deference and interdependence lead to better epistemic results only if the network of epistemic institutions in which the individual finds themselves is in fact well-functioning (Hayward 2024, Levy 2022a, Levy 2022b, Coady 2003). There are various stances on what the requirements are for a well-functioning epistemic network, such as good feedback mechanisms to promote accuracy, a good incentive structure to promote honesty, and mechanisms for detecting deception. But this is a question orthogonal to our current discussion. The important point for our discussion is that there are requirements for a well-functioning epistemic network, and that sometimes they are not satisfied. It is possible for an individual to find themselves in a badly functioning network. This is not a mere modal possibility but a historical reality: badly functioning epistemic networks have existed in various human societies throughout history, in which individuals would have fared better not to defer. The Lysenkoism of Soviet scientific institutions is one example, the era of Galen's hegemony in medicine another. Trust and deference essentially amplify whatever quality an epistemic network already exhibits. If the network is more reliable than not, more trust spreads more reliable claims out to more people in society. But if it is more unreliable, then more trust spreads more unreliable claims out to more people in society.
Looking back on our evolutionary history, we observe the common denominators of cumulativeness and deep sociality among the successful human populations. But these features alone are not sufficient conditions for epistemic success. At the very least, the epistemic networks of these populations must also have been in fact well-functioning. We must take care not to fall prey to the survivorship bias. This is a cognitive bias where one does not account for the data concerning those who did not survive. It is entirely possible, even realistic, that human populations have existed which exhibited both cumulative and deeply social cognition, but which nonetheless died out because the cultural practices that they passed down were in fact maladaptive. What is adaptive ex post is not the same as what is adaptive ex ante. Just because something worked looking back does not mean it will work looking forward, in this particular place, time and for this particular individual. Furthermore, there is a difference between what is adaptive for the species as a whole and what is adaptive for the individual. More deference may be adaptive for the human species as a whole, since those epistemic networks that happen to be well-functioning will function even better upon more deference, whilst those that happen to be dysfunctional will function even worse upon more deference and die off. As a result, only the well-functioning networks survive, and so do their reliable knowledge products, which then spread out to the rest of the species. But again, individuals care about their own lives, and it will be poor consolation to them that the species thrives, if they happen to be one of the individuals culled off during this epistemic selection and elimination process.
All of these are perhaps very trivial points to make, but they seem to me to be underemphasized in Levy’s works on this subject. Levy acknowledges the possibility and historical existence of dysfunctional epistemic networks, but claims that even in dysfunctional epistemic networks, individuals can expect to do better by deferring than by doing their own research (Levy 2022a). Laypersons are so incompetent at assessing the first-order evidence for themselves, he argues, that trusting unreliable networks still probabilistically outperforms independent research. I completely agree with Levy’s assessment that laypeople have very low accuracy in the knowledge acquisition stage, and that experts by contrast have high accuracy in their narrow field of expertise. This large accuracy differential is the main reason why Levy prescribes more deference across his works on this subject. But I think that Levy underestimates how much moral dimensions like honesty and social responsibility affect reliability (Rolin 2020). Just because experts are highly accurate in acquiring knowledge for themselves does not imply that they are highly accurate in communicating that knowledge to others. Again, Levy does acknowledge that moral dimensions such as honesty and shared values matter for reliability (Levy 2022a, Levy 2022b, Levy 2021), but only very briefly in passing, and even then he downplays their significance.15 But even granting that Levy did not underestimate the moral dimensions of reliability, i.e. even supposing Levy’s probabilistic claim were true (that individuals can expect to fare better by deferring than by doing their own research), this does not change the historical fact that epistemic networks have existed which were so defunct that less deference performed better. And since individuals care about their own lives, they care about what is most reliable to do in their particular situation.
In other words, the general probabilistic prescription to defer more would not, in individuals’ own estimation, be adaptive for them, if their situation happens to be the exception to this rule.
What’s the practical upshot of all this? Does this mean that individuals should not follow any general rules, but instead assess case-by-case whether the epistemic network in which they find themselves is in fact trustworthy, to decide whether or not to defer more? Levy has stated that laypersons cannot reliably assess the quality of an epistemic network on their own, as this task is so difficult as to be a field of expertise in its own right. “[T]o identify the reliable epistemic authorities we may defer to in good conscience. This is a difficult and specialized task, and one on which individual cognition is no more reliable than on other difficult and specialized tasks. We can no more identify genuine epistemic authorities on our own than we can answer important scientific questions on our own. We are reliant on others—on those very institutions, and the institutions of civil society they inform—to identify them for us as the authorities to defer to, and we rely on the scientific community to keep them (that is, themselves) honest.” (Levy 2022a, 16). I agree with this assessment. What this means is that we are effectively left with an epistemic Russian Roulette. If our epistemic network is well-functioning, we do better to defer more. If it is dysfunctional, we do better to defer less and be more epistemically independent. However, we have no reliable way of finding out ex ante (before the fact, e.g. the collapse of a bad network) whether our network is functioning well or badly. Despite this state of affairs, Levy still leans towards a general prescription of more deference and trust.
My prescription, on the other hand, is: “Place your best bets, individuals!” That is, I neither prescribe more deference nor more independence as a general rule. I prescribe a case-by-case assessment and decision. I advise that individuals should decide, to the best of their knowledge and ability, whichever option they believe is more appropriate for their particular situation. To my thinking, a proper estimation of the genuine probability of a defunct epistemic network warrants a conclusion at least this inconclusive, uncertain and ambivalent. To tend toward either more deference or more independence is to overlook or underemphasize just how variable history and circumstances are. I suspect that living in a Western society may have made Levy more optimistic about the prospects of trust and interdependence (since Western societies arguably have relatively reliable epistemic networks), and thereby biased him towards a prescription of more deference. Undoubtedly, these ‘best bets’ that people place will be heavily influenced by epistemologically inappropriate factors, e.g. gut feelings, cognitive biases and existing beliefs that are unreliable and ideally speaking ought not to belong in a belief-formation mechanism. But no belief-formation is perfect, and even Levy’s strategy of deferring more is susceptible to the same factors (although the prevalence of various factors may differ). My prescription for individuals to place their best bets is not to sidestep all the criteria of reliability, justification and knowledge discussed in the epistemology literature. It is rather a moral acknowledgement, acceptance, reminder and reaffirmation of the unavoidable, namely decision-making under uncertainty. What is intellectually trivial may be morally arduous.
“If old truths are to retain their hold on men’s minds, they must be restated in the language and concepts of successive generations. What at one time are their most effective expressions gradually become so worn with use that they cease to carry a definite meaning. The underlying ideas may be as valid as ever, but the words, even when they refer to problems that are still with us, no longer convey the same conviction; the arguments do not move in a context familiar to us; and they rarely give us direct answers to the questions we are asking. This may be inevitable because no statement of an ideal that is likely to sway men’s minds can be complete: it must be adapted to a given climate of opinion, presuppose much that is accepted by all men of the time, and illustrate general principles in terms of issues with which they are concerned.” ― Friedrich A. Hayek
References
- Acerbi, A., & Mesoudi, A. 2015. “If we are all cultural Darwinians what’s the fuss about? Clarifying recent disagreements in the field of cultural evolution,” Biology & Philosophy, 30/4: 481–503.
- Anderson, Elizabeth. 2011. “Democracy, Public Policy, and Lay Assessments of Scientific Testimony.” Episteme 8 (2): 144–64. https://doi.org/10.3366/epi.2011.0013.
- Ballantyne, Nathan. 2019a. “Epistemic Trespassing.” In Knowing Our Limits, by Nathan Ballantyne, 1st ed., 195–219. New York: Oxford University Press. https://doi.org/10.1093/oso/9780190847289.003.0008.
- Ballantyne, Nathan. 2019b. “Novices and Expert Disagreement.” In Knowing Our Limits, by Nathan Ballantyne, 1st ed., 220–45. New York: Oxford University Press. https://doi.org/10.1093/oso/9780190847289.003.0009.
- Boyd, R., Richerson, P. J., & Henrich, J. 2011. “The cultural niche: Why social learning is essential for human adaptation,” Proceedings of the National Academy of Sciences, 108/Supplement 2: 10918–25.
- Burcham, P. C. 2008. “Toxicology down under: Past achievements, present realities, and future prospects,” Chemical Research in Toxicology, 21/5: 967–70.
- Carlin, B. I., Gervais, S., & Manso, G. (2013). “Libertarian paternalism, information production, and financial decision making,” The Review of Financial Studies, 26/9: 2204–28.
- Chater, N. 2018. The Mind Is Flat: The Illusion of Mental Depth and the Improvised Mind. New Haven, CT: Yale University Press.
- Chudek, M., Heller, S., Birch, S., & Henrich, J. 2012. “Prestige-biased cultural learning: bystander’s differential attention to potential models influences children’s learning,” Evolution and Human Behavior, 33/1: 46–56.
- Clark, A. 1997. Being There: Putting Brain, Body and World Together Again. Cambridge, MA: MIT Press.
- Coady, C. A. J. 1992. Testimony: A Philosophical Study. Oxford: Clarendon Press.
- Coady, David. 2003. “Conspiracy Theories and Official Stories.” International Journal of Applied Philosophy 17 (2): 197–209.
- Coady, David. 2007. “Are Conspiracy Theorists Irrational?” Episteme 4 (2): 193–204. https://doi.org/10.3366/epi.2007.4.2.193.
- Dentith, M R. X. 2018. “Expertise and Conspiracy Theories.” Social Epistemology 32 (3): 196–208. https://doi.org/10.1080/02691728.2018.1440021.
- Dentith, M R. X. 2019. “Conspiracy Theories on the Basis of the Evidence.” Synthese 196 (6): 2243–61. https://doi.org/10.1007/s11229-017-1532-7.
- Dentith, M R. X. 2022. “Suspicious Conspiracy Theories.” Synthese 200 (3): 243. https://doi.org/10.1007/s11229-022-03602-4.
- Dentith, M. R. X., and Brian L. Keeley. 2018. “The Applied Epistemology of Conspiracy Theories.” In The Routledge Handbook of Applied Epistemology, by David Coady and James Chase, edited by David Coady and James Chase, 1st ed., 284–94. Routledge. https://doi.org/10.4324/9781315679099-21.
- Dentith, Matthew R. X. 2016. “When Inferring to a Conspiracy Might Be the Best Explanation.” Social Epistemology 30 (5–6): 572–91. https://doi.org/10.1080/02691728.2016.1172362.
- Douglas, H. 2000. “Inductive risk and values in science,” Philosophy of Science, 67/4: 559–79. https://doi.org/10.1086/392855
- Douglas, Heather. 2021. “The Role of Scientific Expertise in Democracy.” In The Routledge Handbook of Political Epistemology, edited by Michael Hannon and Jeroen De Ridder, 1st ed., 435–45. Abingdon: Routledge. https://doi.org/10.4324/9780429326769-52.
- Elliott, Kevin C. 2017. A Tapestry of Values: An Introduction to Values in Science. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190260804.001.0001.
- Evans, G. 1982. The Varieties of Reference. Oxford and New York: Oxford University Press.
- Flynn, E., & Smith, K. 2012. “Investigating the mechanisms of cultural acquisition,” Social Psychology, 43/4: 185–95.
- Frey, Bruno S., and Daniel Waldenström. n.d. “Contrasting Ex Post with Ex Ante Assessments of Historical Events.”
- Gergely, G., Bekkering, H., & Király, I. (2002). “Rational imitation in preverbal infants,” Nature, 415/6873: 755–755.
- Goldberg, S. C. 2010. Relying on Others: An Essay in Epistemology. Oxford: Oxford University Press.
- Goldman, Alvin I. 2021. “How Can You Spot the Experts? An Essay in Social Epistemology.” Royal Institute of Philosophy Supplement 89 (May): 85–98. https://doi.org/10.1017/S1358246121000060.
- Hall, L., Johansson, P., & Strandberg, T. 2012. “Lifting the veil of morality: Choice blindness and attitude reversals on a self-transforming survey,” PLOS ONE, 7/9: e45457. DOI: 10.1371/journal.pone.0045457
- Hammond, Peter J. 1981. “Ex-Ante and Ex-Post Welfare Optimality under Uncertainty.” Economica 48 (191): 235. https://doi.org/10.2307/2552915.
- Hardwig, John. 1991. “The Role of Trust in Knowledge.” The Journal of Philosophy 88 (12): 693. https://doi.org/10.2307/2027007.
- Harris, Keith. 2018. “What’s Epistemically Wrong with Conspiracy Theorising?” Royal Institute of Philosophy Supplement 84 (November): 235–57. https://doi.org/10.1017/S1358246118000619.
- Harris, P. 2012. Trusting What You’re Told. Cambridge, MA: Harvard University Press.
- Harris, R. 1978. “Ex-Post Efficiency and Resource Allocation Under Uncertainty.” The Review of Economic Studies 45 (3): 427–36. https://doi.org/10.2307/2297245.
- Hayward, Tim. 2024. “The Applied Epistemology of Official Stories.” Social Epistemology 38 (4): 425–45. https://doi.org/10.1080/02691728.2023.2227950.
- Henrich, J. 2015. The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter. Princeton, NJ: Princeton University Press.
- Henrich, J., & Boyd, R. 1998. “The evolution of conformist transmission and the emergence of between-group differences,” Evolution and Human Behavior, 19/4: 215–41.
- Henrich, J., & Gil-White, F. J. 2001. “The evolution of prestige: Freely conferred deference as a mechanism for enhancing the benefits of cultural transmission,” Evolution and Human Behavior, 22/3: 165–96.
- Henrich, J., & Henrich, N. 2010. “The evolution of cultural adaptations: Fijian food taboos protect against dangerous marine toxins,” Proceedings of the Royal Society B: Biological Sciences, 277/1701: 3715–24.
- Heyes, C. (2018). Cognitive Gadgets: The Cultural Evolution of Thinking. Cambridge, MA: Belknap Press.
- Johansson, P., Hall, L., Sikström, S., & Olsson, A. 2005. “Failure to detect mismatches between intention and outcome in a simple decision task,” Science (New York, N.Y.), 310/5745: 116–9.
- Kilov, Daniel. 2021. “The Brittleness of Expertise and Why It Matters.” Synthese 199 (1–2): 3431–55. https://doi.org/10.1007/s11229-020-02940-5.
- Kitcher, Philip. 2001. “Elitism, Democracy, and Science Policy.” In Science, Truth, and Democracy, by Philip Kitcher, 1st ed., 137–46. New York: Oxford University Press. https://doi.org/10.1093/0195145836.003.0011.
- Kolstad, Charles D., Thomas S. Ulen, and Gary V. Johnson. 2018. “Ex Post Liability for Harm vs. Ex Ante Safety Regulation: Substitutes or Complements?” In The Theory and Practice of Command and Control in Environmental Policy, by Peter Berck, edited by Gloria E. Helfand and Peter Berck, 1st ed., 331–44. Routledge. https://doi.org/10.4324/9781315197296-16.
- Lackey, J., & Sosa, E. 2006. The Epistemology of Testimony. Oxford: Oxford University Press.
- Lamb, William F, Miklós Antal, Katharina Bohnenberger, Lina I Brand-Correa, Finn Müller-Hansen, Michael Jakob, Jan C Minx, Kilian Raiser, Laurence Williams, and Benjamin K Sovacool. 2020. “What Are the Social Outcomes of Climate Policies? A Systematic Map and Review of the Ex-Post Literature.” Environmental Research Letters 15 (11): 113006. https://doi.org/10.1088/1748-9326/abc11f.
- Land, Kenneth C., Kurt Finsterbusch, John G. Grumm, and Stephen L. Wasby. 1982. “Ex Ante and Ex Post Assessment of the Social Consequences of Public Projects and Policies.” Contemporary Sociology 11 (5): 512. https://doi.org/10.2307/2068390.
- Levy, Neil. 2021. Bad Beliefs: Why They Happen to Good People. 1st ed. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780192895325.001.0001.
- Levy, Neil. 2022a. “Do Your Own Research!” Synthese 200 (5): 356. https://doi.org/10.1007/s11229-022-03793-w.
- Levy, Neil. 2022b. “In Trust We Trust: Epistemic Vigilance and Responsibility.” Social Epistemology 36 (3): 283–98. https://doi.org/10.1080/02691728.2022.2042420.
- Levy, Neil. 2023. “It’s Our Epistemic Environment, Not Our Attitude Toward Truth, That Matters.” Critical Review 35 (1–2): 94–111. https://doi.org/10.1080/08913811.2022.2149108.
- Loftus, E. F. 2003. “Make-believe memories,” The American Psychologist, 58/11: 867–73.
- Lyons, D. E., Young, A. G., & Keil, F. C. 2007. “The hidden structure of overimitation,” Proceedings of the National Academy of Sciences of the United States of America, 104/50: 19751–6.
- Malekpour, Shirin, Warren E. Walker, Fjalar J. De Haan, Niki Frantzeskaki, and Vincent A.W.J. Marchau. 2020. “Bridging Decision Making under Deep Uncertainty (DMDU) and Transition Management (TM) to Improve Strategic Planning for Sustainable Development.” Environmental Science & Policy 107 (May):158–67. https://doi.org/10.1016/j.envsci.2020.03.002.
- Mascaro, O., & Sperber, D. 2009. “The moral, epistemic, and mindreading components of children’s vigilance towards deception,” Cognition, 112/3: 367–80.
- Matheson, Jonathan. 2024. “Why Think for Yourself?” Episteme 21 (1): 320–38. https://doi.org/10.1017/epi.2021.49.
- McKenzie, C. R. M., Liersch, M. J., & Finkelstein, S. R. (2006). “Recommendations implicit in policy defaults,” Psychological Science, 17/5: 414–20.
- Meltzoff, A. N. (1988). “Infant imitation after a 1-week delay: Long-term memory for novel acts and multiple stimuli,” Developmental Psychology, 24/4: 470–6.
- Mercier, H., & Sperber, D. 2017. The Enigma of Reason: A New Theory of Human Understanding. London etc.: Allen Lane.
- Mergaert, Lut, and Rachel Minto. 2015. “Ex Ante and Ex Post Evaluations: Two Sides of the Same Coin?: The Case of Gender Mainstreaming in EU Research Policy.” European Journal of Risk Regulation 6 (1): 47–56. https://doi.org/10.1017/S1867299X0000427X.
- Milne, Frank, and H. M. Shefrin. 1988. “Ex Post Efficiency and Ex Post Welfare: Some Fundamental Considerations.” Economica 55 (217): 63. https://doi.org/10.2307/2554247.
- Moore, Alfred. 2017. Critical Elitism: Deliberation, Democracy, and the Problem of Expertise. 1st ed. Cambridge University Press. https://doi.org/10.1017/9781108159906.
- Nagell, K., Olguin, R. S., & Tomasello, M. (1993). “Processes of social learning in the tool use of chimpanzees (Pan troglodytes) and human children (Homo sapiens),” Journal of Comparative Psychology, 107/2: 174–86.
- Ove Hansson, Sven. 1996. “Decision Making Under Great Uncertainty.” Philosophy of the Social Sciences 26 (3): 369–86. https://doi.org/10.1177/004839319602600304.
- Pigden, Charles R. 2016. “Are Conspiracy Theorists Epistemically Vicious?” In A Companion to Applied Philosophy, edited by Kasper Lippert‐Rasmussen, Kimberley Brownlee, and David Coady, 1st ed., 120–32. Wiley. https://doi.org/10.1002/9781118869109.ch9.
- Putnam, H. 1975. “The meaning of ‘meaning’,” Minnesota Studies in the Philosophy of Science, 7: 131–93.
- Rabb, N., Fernbach, P. M., & Sloman, S. A. 2019. “Individual representation in a community of knowledge,” Trends in Cognitive Sciences, 23/10: 891–902.
- Rayner, K. 1998. “Eye movements in reading and information processing: 20 years of research,” Psychological Bulletin, 124/3: 372–422.
- Richerson, P. J., & Boyd, R. 2008. Not By Genes Alone: How Culture Transformed Human Evolution. Chicago: University of Chicago Press.
- Rieznik, A., Moscovich, L., Frieiro, A., Figini, J., Catalano, R., Garrido, J. M., Heduan, F. Á., et al. 2017. “A massive experiment on choice blindness in political decisions: Confidence, confabulation, and unconscious detection of self-deception,” PLOS ONE, 12/2: e0171108. https://doi.org/10.1371/journal.pone.0171108.
- Rolin, Kristina H. 2021. “Objectivity, Trust and Social Responsibility.” Synthese 199 (1–2): 513–33. https://doi.org/10.1007/s11229-020-02669-1.
- Rolin, Kristina H. 2024. “Earning Epistemic Trustworthiness: An Impact Assessment Model.” Synthese 203 (2): 39. https://doi.org/10.1007/s11229-023-04472-0.
- Samset, Knut, and Tom Christensen. 2017. “Ex Ante Project Evaluation and the Complexity of Early Decision-Making.” Public Organization Review 17 (1): 1–17. https://doi.org/10.1007/s11115-015-0326-y.
- Schacter, D. L. 1996. Searching For Memory: The Brain, the Mind, and the Past. New York: Basic Books.
- Schaik, C. van, Ancrenaz, M., Borgen, G., Galdikas, B., Knott, C., Singleton, I., Suzuki, A., et al. 2003. “Orangutan cultures and the evolution of material culture,” Science, 299: 102–5.
- Schwitzgebel, E. 2009. “Knowing your own beliefs,” Canadian Journal of Philosophy, 39/sup1: 41–62.
- Shea, N. 2009. “Imitation as an inheritance system,” Philosophical Transactions of the Royal Society B: Biological Sciences, 364/1528: 2429–43.
- Silvestre, Joaquim. 2002. “Discrepancies between Ex Ante and Ex Post Efficiency under Identical Subjective Probabilities.” Economic Theory 20 (2): 413–25. https://doi.org/10.1007/s001990100196.
- Simons, D. J., & Chabris, C. F. 1999. “Gorillas in our midst: sustained inattentional blindness for dynamic events,” Perception, 28/9: 1059–74.
- Simons, D. J., & Levin, D. T. 1997. “Change blindness,” Trends in Cognitive Sciences, 1/7: 261–7.
- Simons, D. J., & Levin, D. T. 1998. “Failure to detect changes to people during a real-world interaction,” Psychonomic Bulletin & Review, 5/4: 644–9.
- Smismans, Stijn. 2015. “Policy Evaluation in the EU: The Challenges of Linking Ex Ante and Ex Post Appraisal.” European Journal of Risk Regulation 6 (1): 6–26. https://doi.org/10.1017/S1867299X00004244.
- Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. 2010. “Epistemic vigilance,” Mind & Language, 25/4: 359–93.
- Sunstein, C. R. 2006. Infotopia: How Many Minds Produce Knowledge. Oxford; New York: Oxford University Press.
- Surowiecki, J. 2004. The Wisdom of Crowds: Why the Many Are Smarter Than the Few. New York: Doubleday.
- Tennie, C., Call, J., and Tomasello, M. 2009. “Ratcheting up the ratchet: On the evolution of cumulative culture,” Philosophical Transactions of the Royal Society B: Biological Sciences, 364/1528: 2405–15.
- Wagenknecht, Susann. 2015. “Facing the Incompleteness of Epistemic Trust: Managing Dependence in Scientific Practice.” Social Epistemology 29 (2): 160–84. https://doi.org/10.1080/02691728.2013.794872.
- Weaver, K., Garcia, S. M., Schwarz, N., & Miller, D. T. 2007. “Inferring the popularity of an opinion from its familiarity: A repetitive voice can sound like a chorus,” Journal of Personality and Social Psychology, 92/5: 821–33.
Of course, given our goals, it is not necessary for us to learn all the knowledge that exists. But knowledge is power: the ability to acquire and process more knowledge is decisively advantageous for whatever goals we may have, and this advantage extends to more knowledge than we can individually handle. ↩︎
In turn, the sparseness of these internal representations, and our dependence on the external world for their further specification, make them easily uprooted, or in Levy's words, shallow. Shallow beliefs explain why people can be so inconsistent, changing their minds radically over short periods of time. People's beliefs are mostly determined by cues from the external world, especially social cues; when the social cues change, so do their beliefs. One of Levy's main arguments in Bad Beliefs is that shallow beliefs explain how so many seemingly rational people can hold untrue beliefs: they respond rationally to social cues, but to the wrong cues, a case of bad inputs.↩︎
It is the inclusive conjunction of these three features, and not any one of them alone, that triggers the default to overimitation in humans.↩︎
One can also try to figure out the reasons for oneself, but again, some cultural practices were developed over multiple generations, making them impossible for any single individual to reconstruct on their own.↩︎
For example, people may have found things that worked through trial and error, but they did not understand why or how they worked.↩︎
Whether overimitation came first and led to deeply social knowledge, or deeply social knowledge came first and made overimitation necessary and hence selected for it, does not matter for the purposes of our discussion. Overimitation is necessary for acquiring knowledge that exceeds individuals' cognitive limits, and human cumulative culture has in fact exceeded those limits for most of our history. This makes overimitation plausibly a necessary behavioral adaptation to human cumulative culture, which in our case has mostly been deeply social, and it explains the human propensity to overimitate.↩︎
There are also many recorded instances of explorers dying without the help of indigenous peoples and their cultural knowledge (Burcham 2008). The explorers could not figure out from scratch how to survive in those particular environments. The natives, on the other hand, thrived, supporting the hypothesis that they possessed knowledge the explorers could not have acquired within their lifetimes, namely knowledge accumulated across generations.↩︎
It is this gap in knowledge that makes trust and deference necessary.↩︎
In principle, people can acquire others' expertise given enough time, resources and effort. In practice, however, humans specialize to such depths that acquiring new fields of expertise is close to impossible. Some people have indeed acquired additional fields of expertise, but even they can master only a few; no one can become a genuine expert in all fields. ↩︎
Not to mention that collective action requires that various individuals agree on the background knowledge of a policy, an agreement achieved either by personally knowing that knowledge or by deferring to it. The more complex this background knowledge is, the more necessary trust and deference become. A restriction to personal knowledge thereby restricts the complexity and comprehensiveness of the policies available for collective action. By relying on others, we as a group put far more knowledge to use than any individual could handle. ↩︎
It is a chicken-and-egg problem which came first, the propensity to overimitate or deeply social complex knowledge (see footnote 6). But once overimitation set in, it started the self-reinforcing loop described here.↩︎
Significant beliefs, that is. I am assuming individuals want to maximize truth among those beliefs necessary for them to make well-informed decisions.↩︎
Levy has argued for more deference with many different arguments throughout multiple works. I just focus on the evolutionary argument here.↩︎
That is, he recommends one engage in social referencing. Relevant higher-order evidence includes signs of competence (e.g. good credentials), honesty (e.g. no conflicts of interest) and social responsibility. People should also defer to those experts that share their values, since scientific assessments and recommendations are not value-neutral (Douglas 2000).↩︎
For example, he writes in a footnote: “The literature on inductive risk (building on Douglas, 2000) emphasises the extent to which all factual enquiries are value-laden: at minimum, researchers must make decisions about how to weigh the risks of false positives against those of false negatives, and these decisions reflect their values. Elsewhere, I’ve suggested that this gives us a reason to prefer the testimony of experts who seem to share our values to experts with different values (Levy 2019), but ascertaining the values embedded in scientific findings is difficult. The reviewer suggests that knowing about inductive risk should make us less deferential. It is surely correct that it gives us a reason to be less confident in scientific findings, but it remains the case that we are unlikely to do better by doing our own research; at best, we will tend to do no worse.” (Levy 2022a, 7, emphasis added) The last statement is not self-evident to me, and in my opinion Levy does not argue sufficiently for it. I glean from his works that he is moved towards this conclusion because he places the greatest importance on the accuracy differential between laypersons and experts in the knowledge-acquisition stage, deeming this the most influential factor in determining epistemic success.↩︎