
Outline of A Future Science of Humanity

Liah Greenfeld

Humanity as a symbolic phenomenon

The possible emergence of a new intellectual center of the world in East and South Asia, offsetting and eventually nullifying vested interests in the United States [see Social Science from the Turn of the 20th Century], could create the conditions necessary for the rise of a science of humanity, one that would be capable of progressively accumulating objective knowledge of its subject matter. Intellectually, the first step in that direction would be to identify the quality that distinguishes humanity from the subject matter of biology, defining humanity as an ontological category in its own right. Comparative zoology provides the empirical basis for such an identification. Comparing human beings with other animals immediately highlights the astonishing variability and diversity of human societies and human ways of life (what humans actually do in their roles as parents, workers, citizens, and so on) and the relative uniformity of animal societies, even among the most social and intelligent animals, such as wolves, lions, dolphins, and primates. Given the minuscule quantitative difference between the genome of Homo sapiens and that of chimpanzees (barely more than 1 percent), it is clear that the enormous difference in variability of ways of life cannot be accounted for genetically—that is, in terms of biological evolution. Instead, it is explained by the fact that, while all other animals transmit their ways of life, or social orders, primarily genetically, humans transmit their ways of life primarily symbolically, through traditions of various kinds and, above all, through language. It is this symbolic transmission of human ways of life (both the transmission itself and the ways of life that are necessarily so transmitted) to which the term “culture” implicitly refers. Culture in this sense qualitatively—and radically—separates human beings from the rest of the biological animal kingdom.

This empirical evidence of human distinctiveness shows that humanity is more than just a form of life—i.e., a biological species. It represents a reality of its own, nonorganic kind, justifying the existence of an autonomous science. The justification is provided not by the existence as such of society among humans but by the symbolic manner in which human societies are transmitted and regulated. Stating the point explicitly in this way shifts the focus of inquiry from social structures—the general focus of social sciences—to symbolic processes and opens up a completely new research program, in its significance analogous to the one that Darwin established for biology. Humanity is essentially a symbolic—i.e., cultural, rather than social—phenomenon.

When the science of humanity at last comes into being, it will make use of the information collected in the social sciences but will not be a social science itself. Its subject matter, whichever aspects of human life it explores, will be the symbolic process on its multiple levels—the individual level of the mind and the collective levels of institutions, nations, and civilizations (see below Institutions, nations, and civilizations)—and the multitude of specific processes of which it consists. The science of humanity will be the science of culture, and its subdisciplines will be cultural sciences.

In contrast to the current social sciences, but like biology and physics, the science of humanity will have an inherent general standard for assessing particular claims and theories. As an autonomous reality, humanity is necessarily irreducible to the laws operating within the organic reality of life and to the laws operating within the physical reality of matter. It nevertheless exists within the boundary conditions of those laws—i.e., within the (organic and physical) reality created by the operation of those laws. Consequently, it is impossible without those boundary conditions. All the regularities of autonomous phenomena existing within the boundary conditions of other phenomena of a different nature (i.e., organic regularities existing within the boundary conditions of matter and cultural regularities existing within the boundary conditions of life) must be logically consistent with the laws operating within those boundary conditions. Therefore, every regularity postulated about humanity—every generalization, every theory—beginning with the definition of its distinctiveness, must entail mechanisms that relate that regularity to the human animal organism—mechanisms of translation or mapping onto the organic world. Indeed, the recognition that humanity is a symbolic reality implies such mechanisms, which connect every regularity in that reality to human biological organisms through the mind—the symbolic process supported by the individual brain.

The postulation of the mind and other distinguishing characteristics of humanity follows directly from the recognition of humanity as a symbolic reality, because such characteristics are logically implied in the nature of symbols. Symbols are arbitrary signs: the meanings they convey are defined by the contexts in which they are used. Every context changes with the addition of every new symbol to it—which is to say, every context changes constantly. Every present meaning depends on the context immediately preceding it and conditions the contexts and meanings following it, the changes thus occurring in time. That fact means that symbolic reality is a temporal phenomenon—a process. (It must always be remembered that the concept of structure in discourse about culture can only be a metaphor; nothing stands still in culture—it is essentially historical, in other words.) The symbolic process—that is, the constant assignment and reassignment of meanings to symbols (their interpretation)—happens in the mind, which is implicitly recognized as distinct from the brain (or from whatever other physical organ it may be associated with) in languages in which “mind” is a concept. The mind, supported by and in contrast to the brain, is itself a process—analogous, for instance, to the physical processes of digestion, happening to food in the stomach, or breathing, happening to air in the lungs. More specifically, it is the processing of symbolic stimuli—culture—in the brain. That fact makes culture both a historical and a mental phenomenon. In the science of humanity, moreover, it necessitates a perennial focus on the individual (methodological individualism, indeed already recommended by Weber), the individual being defined as a culturally constituted being and the mind being seen as individualized culture (“culture in the brain”). It also precludes the reification of social structures of whatever kind, be they classes, races, states, or markets. Although the mind is the creative element in culture (the symbolic process in general and the specific processes of which it consists on the collective level), its creativity is necessarily oriented by cultural stimuli operating on it from the outside. The symbolic process, just like the organic process of life, takes place on the individual and the collective levels at once, involving both continuity and contingency. Like genetic mutations in the process of life, change is always a possibility, but its nature (and thus the direction of evolution in the case of life and the direction of history in the case of humanity) can never be predicted.

Identity, will, and the thinking self

From the nature of symbols and symbolic processes one can also formulate hypotheses regarding the inner structure or anatomy of the mind, which can then be methodically tested against empirical evidence—historical, psychological, psychiatric, and even neuroscientific. The variability of human social orders, which is a function of the fact that human ways of life are constituted and transmitted symbolically rather than genetically, implies that, in contrast to all other animals, who are born into a specific ordered world, clearly organized by their genes, human beings are born into a world with numerous, potentially mutually exclusive, possibilities, and very early on in life (from early childhood) they must be able to adapt themselves to the possibilities that happen to be realized around them. Not being genetically equipped for any particular possibility, humans, in the first years of their lives, must grow adaptive mechanisms for focusing on such possibilities. Those mechanisms are the constituent processes of the mind.

Two of those processes can be logically deduced from the essentially indeterminate (arbitrary, potentially variable) nature of human social orders: identity and will. No other animal (with the exception of pets, whose world is the same as that of their human companions and is thus, by definition, also cultural) has a need for identity and will: its position vis-à-vis other members of its group and its actions under all likely circumstances—that is, the circumstances of the species’ adaptive niche—are genetically dictated. Being genetically unique, each animal has individuality, but only human individual character has (and is mostly a reflection of) this adaptive subjective dimension. Identity and will constitute functional requirements of the individual’s adaptation to the indeterminate cultural environment. They represent different aspects of the self, or “I”—identity being the relationally constituted self and will being the acting self, or agency.

Identity may be understood as symbolic self-definition: the image of one’s position in a sociocultural “space” within a larger image of the relevant sociocultural terrain. The larger image is an individualized microcosm of the particular culture in which one is immersed, a mental map of the variable aspects of the sociocultural environment, analogous to representations of the changing spatial environment yielded by place cells, discovered in neurological experiments with rodents. Like the indication of a rat’s place on the spatial mental map, the human identity map defines the individual’s possibilities of adaptation to the sociocultural environment. Because that environment is so complex, however, the human individual, unlike a rat, is presented in the map with various possibilities of adaptation, which cannot be objectively and clearly ranked. They must be ranked subjectively—i.e., the individual must choose or decide which of them to pursue. This subjective ranking of options is a function of the general character of the mental map (for instance, what place on it is occupied by God and the afterlife, or by the nation, or by one’s favorite sports team, etc.) and where one is placed on it in relation to such other presences.

While identity serves as a representation (and agent) of a particular culture (the culture in which the individual is immersed), will is a function of the symbolic process in general—i.e., it reflects the intentionality of symbols. Human actions (except involuntary reflexes) are not determined reactions but products of decision and choice. The nature of the human response to any stimulus is indeterminate: it is the will that steps in, as it were, in a split-second intermediate stage between stimulus and reaction, deciding in that moment what the response will be. The word “consciousness” is frequently applied to these moments of decision, but, unless rendered problematic by special circumstances, both identity and will are largely unconscious processes in the sense that humans very rarely think about or become consciously aware of them.

Given the character of the human environment, the logical reasons for the existence of identity and will are rather obvious: both “structures” are necessary for the individual’s adaptation to that environment and, therefore, for the individual’s survival. Discoverable only logically, they remain hypothetical until tested against empirical evidence. This is not so as regards the thinking component of the mind—the thinking “I,” or the “I” of self-consciousness (which can also be called “the ‘I’ of Descartes,” because it is to that notion that Descartes referred in his famous dictum cogito, ergo sum). Each person is aware of a thinking “I.” Its existence is known directly through experience—in other words, empirically. This knowledge is absolute, or certain, in the sense that it is impossible to doubt. It is, in fact, the only certain knowledge available to human beings. The thinking “I” is not necessary for the individual’s adaptation to the sociocultural environment or for his or her survival in it, but human existence in general would be impossible without it. It is a necessary condition for the cultural process on the collective level. As the “I” of self-consciousness, the thinking “I” makes self-consciousness possible for any individual human; as the process of self-conscious thought, the one explicitly symbolic process among all symbolic mental processes, it makes possible indirect learning and thereby the transmission of human ways of life across generations and distances. It is not just a process informed and directed by our symbolic environment but an essentially symbolic process—similar to the development of language or a musical tradition, to the elaboration of a theorem, and to the transmission of culture in general—in the sense that it actually operates with formal symbols, the formal media of symbolic expression. This is the reason for the frequently noted dependence of thought on language. Thought extends only as far as the possibilities of the formal symbolic medium in which it operates.

How can one test the anatomy of the mind, most of which is discoverable only through logical deduction? As in medicine, malfunction provides an excellent empirical test. Under normal conditions, the three “structures” of the mind are perfectly integrated, but in cases of mental illness integrated minds disintegrate into the three components, each of which can then be observed in its specific malfunction. This is particularly clear in the case of functional mental diseases of unknown organic basis, such as depressive disorders (unipolar or bipolar) and schizophrenia—which in fact are generally identified by clinicians with the loss of aspects of the self or its complete disintegration. Depressive disorders, for example, specifically affect the will: depressed patients lose motivation, sometimes to such an extent that they find it difficult to get out of bed or to do the simplest things. In the manic stage of manic-depressive (bipolar) disorder, patients lose control of themselves altogether, being unable to will themselves to act or to stop acting, in retrospect explaining that they “lost their mind” or that the person who acted or did not act “wasn’t me.” The impairment of the will in bipolar disorder entails self-loathing (in the case of depression) and extremely high self-confidence (during acute mania)—i.e., an uncertain, oscillating sense of identity. Both depressive disorders and schizophrenia express themselves in delusions, or beliefs that one is what one definitely is not. Accordingly, both the overall nature of one’s mental map and one’s place on it radically change. In schizophrenia in particular, the thinking “I” completely separates from the mind, and patients experience their own thoughts as implanted from outside and their self-consciousness as being watched or observed by someone else. At the same time, their thinking (which they experience as alien) faithfully reflects the tropes and commonplaces of their cultural environment.

Certain subdisciplines of the science of humanity will make the cultural process on the individual level of the mind their special subject. One possible branch, analogous to cellular biology, might study the interrelations between different symbolic components of the human mental process. Another, analogous to biochemistry or biophysics, might study the interrelations between the symbolic and the organic components of the mental process—that is, the interrelations between the mind and the brain. The formation, transmission, changes, and pathologies of identity, will, and the thinking self will be central subjects in these subdisciplines, which will necessarily inform, and be informed by, the study of the cultural process on the collective level, just as cellular biology, biochemistry, and biophysics are interconnected with the focused study of particular forms of life, from kingdoms to species (e.g., entomology, primatology) and with subdisciplines such as genetics, ecology, and evolutionary biology, which focus on macro-level life processes.

Institutions, nations, and civilizations

Knowledge accumulated (and left uninterpreted) in the course of the history of the social sciences—specifically, knowledge that amounts to comparative history—allows one, when it is examined from the perspective of the science of humanity and in light of the recognition of the symbolic and mental nature of the subject, to identify several layers of the cultural process on the collective level. Those layers can be distinguished analytically, though not empirically, because every cultural process happens simultaneously in several of them, in combinations that in each particular case are subject to empirical investigation.

There are three autonomous layers. In order of increasing generality they are: (1) the layer of social institutions, or established “ways of thinking and acting” (as Durkheim defined them) in the various spheres of social life, such as economy, family, politics, and so on; (2) the layer of nations (in the past, mostly religions), understood as functionally-integrated, geopolitically bounded systems of social institutions; and (3) the layer of civilizations, the most durable and causally significant of the three layers. Civilizations are family sets of autonomous systems, sharing the same (civilizational) first principles (e.g., monotheism and logic) and, although not systematically related to each other, interdependent in their development. The mind is the active element in the collective cultural process at all layers, constantly involved in their perpetuation and change while being constantly affected, constrained, and stimulated by them. Civilizations constitute the independent and thus the fundamental layer of the cultural process on the collective level, in the sense of depending on no other cultural process on that level but only on the mind in their origins. They are a framework subsuming all the others and subsumed in none, causally significant in every layer below and—together with mind—ultimately responsible for cultural diversity in the world.

The only concept from the social sciences that can be appropriated and built upon within the science of humanity is Durkheim’s concept of anomie, which implicates the psychological mechanisms that connect cause and effect in any particular case (connecting the mind and culture in one process) and therefore lends itself easily to investigation by empirical evidence. Anomie refers to a condition of systemic inconsistency among collective representations, directly affecting individual experience and creating profound psychological discomfort. The discomfort motivates participants in the situation in question to resolve the bothersome inconsistency. Thus the concept encompasses the most generally applicable theory of sociocultural change—a change in identity, which leads to changes in established ways of thinking and acting within more or less extended areas of experience.

Applications of the science of humanity: nationalism, economic growth, and mental illness

This minimal exposition of the ground principles of the science of humanity already provides a sufficient basis for raising and answering, logically and empirically, questions regarding phenomena that the current social sciences are capable of approaching, if at all, only speculatively. As examples, one can focus on three such phenomena that have been at the center of public discussion since at least the late 19th century: nationalism, economic growth, and functional mental illness. The amount of information collected about them is enormous; all three have been subjects of voluminous descriptive and “theoretical” (speculative) literature. Yet, this literature has not been able to explain them, failing to answer the fundamental question of what causes these phenomena, or why they exist. The practical effects of this inability to understand the forces controlling human life cannot be exaggerated.

Within the framework of the science of humanity, one would approach nationalism, economic growth, and functional mental illness without any preconceptions other than that they are symbolic, by definition historical, phenomena—i.e., products of new symbolic contexts, created by the reinterpretation of certain collective representations by certain minds at certain specific moments in the cultural process. The first step would be to establish when and where—in what circumstances—these moments occurred. The appearance of new vocabularies (which explicitly record new experiences and transmit new meanings) is by far the best, though not the only, indicator. In the case of nationalism, the name itself orients research toward European languages. Their examination before the concept entered broad circulation—that is, beginning in the 18th century and moving backward—reveals that the concept of the nation as generally understood today—as the people to which one belongs, from which one derives one’s essential identity, and to which one owes allegiance—first appeared in the early 16th century in England, signaling a dramatic change in the meanings of the words nation and people. Before that time, nation referred to exceedingly small groups of very highly placed individuals, representatives of temporal and ecclesiastical rulers at church councils, each such group a tiny elite making decisions that determined the collective fates of large populations, and people denoted the overwhelming majorities within those populations—i.e., their common, or lower, classes, the “rabble” or plebs. Whereas membership in the conciliar nation communicated a sense of great power and dignity, there was none in being one of the people; membership in a people meant being a nobody. This distinction existed within the context of the European feudal “society of orders,” which divided the population of every Christian principality into separate categories of humanity, as different from each other as species of animals are. Indeed, they were thought to differ even in the nature of their blood (which could not be mixed): the small upper military order of the nobility (comprising 2 to 4 percent of the population) was believed to have blue blood, while the huge lower order of the people was believed to have red blood.

In the second half of the 15th century, however, protracted conflict between the two branches of the English royal family, known as the Wars of the Roses, actually destroyed the blue-blooded upper order. A new (in fact plebeian) family assumed the crown; the new king needed help from the new aristocracy to carry out his rule; a period of mass, generally upward, mobility began; and enterprising individuals who knew that their blood was “red” found themselves occupying positions that formerly could be occupied only by those whose blood was “blue.” Their experience was positive but not understandable to them. Attempting to explain it to themselves and to make it seem legitimate, they stumbled upon the paradoxical but extremely appealing idea that the English people themselves were a nation. The equation of the two concepts, people and nation, symbolically elevated the masses, making all of the English equal. Their identity—the place of each individual on his or her mental map of the sociocultural terrain—was transformed as it became the dignified national identity inclusively granted to all members of a sovereign community of fundamentally equal individuals.

Schematically, the circumstances in which nationalism emerged can be described as follows: the personal experience of a significant number of well-placed (influential) individuals contradicts existing collective representations, resulting in an irritating anomic situation; because the experience is positive, these individuals reinterpret collective representations in a way that makes it normal (understandable and legitimate); the image of reality and personal identity change to reflect this reinterpretation, establishing different ways of thinking and, therefore, acting in the society at large. The change in identity and in the image of social reality first affects status arrangements (i.e., the organization of social positions, the system of social stratification): nationalism creates a polity-wide community of equals, making individuals interchangeable and mobility between strata possible, expected, and ultimately dependent on individual choice and effort (thereby making one free). This, in turn, changes the nature of political institutions. Because the entire community is now defined as the decision-making elite (the nation), it must be represented in the government: the impersonal state, as the abstraction of popular sovereignty, replaces the personal government of kings. Other specific institutions are similarly affected. Eventually, the dignity implied in nationalism brings it new converts, and national consciousness spreads first to England’s colonies and neighbors and then farther and farther around the world.

The growing influence of England and then Great Britain, which rapidly emerged as the preeminent European power carefully watched everywhere, was an important factor in the attention nationalism initially attracted, and England’s own precocious nationalism was the reason why the country’s influence grew. Nationalism is an inherently competitive form of consciousness. National membership endows the personal identity of every national with dignity, making national populations deeply invested in the dignity of the nation as a whole, or its standing among other nations (into which the national image of reality, from the moment of its emergence, divides the world). Standing among others is always relative and cannot be achieved once and for all. Nations are impelled to compete for dignity—prestige, the respect of others—constantly. They choose to compete for it in those areas that offer them the best chances to end up on top: Russia, for instance, from the outset of its existence as a nation in the 18th century staked its national dignity on military strength, adding to it, when the time was right, the splendor of its high culture (science, literature, ballet, and so on) but never competing in the economic arena. England, the first nation, became ardently competitive when it as yet faced no challengers and so had its pick of competitive arenas. Answering the need to justify the personal experience of upward mobility, English nationalism prioritized the individual, and it was natural for England to challenge the world to economic competition, which directly involved the great majority of its people. Nationalist competitiveness—a race whose finish line is ever-receding, because the prize is a nation’s standing relative to others—drove the classes engaged in economic activity to produce a new, modern economy, the one since called “capitalist,” which differed drastically from the traditional economies that had existed everywhere before nationalism. Whereas traditional economies were oriented toward subsistence, nationalism reoriented the English and then other economies toward growth. With economic performance the basis of international prestige, nations opting for competition in the economic arena cannot afford to stop growing, whatever the costs—political, psychological, or other. This explains another central dimension of modern life, which has preoccupied social thinkers for at least 250 years and which the social sciences have never been able to account for and thus regard as “natural”: economic growth, and specifically the reorientation of national economies toward growth beginning in the late 16th century.

The reorientation of the English economy (the first to reorient) toward growth occurred within decades of the emergence of national identity and consciousness. Another phenomenon that closely accompanied that cultural (symbolic and mental) change was the noticeable rise in rates of functional mental illnesses, which would eventually be identified as schizophrenia and affective disorders. Although individual cases of such illnesses had been recorded well before the 15th century (indeed as far back as the Bible and ancient Greece), with the rise of nationalism they became a public-health and social problem of the first order. Other societies that acquired national identity and consciousness after England also experienced sharply increased incidences of such illnesses, which continued to rise as nationalism spread in them, reaching epidemic proportions in some countries (e.g., the United States).

For more than 200 years, psychiatry, which emerged in response to this problem, has attempted to combat functional mental disease, which nevertheless remains unexplained and, as a result, incurable (though its symptoms can sometimes be alleviated through medication or therapeutic intervention). When the disease is considered within the framework of the science of humanity as outlined above, however, its causes become clear. Nationalism necessarily affects the formation of individual identity. A member of a nation can no longer learn who or what he or she is from the environment, as would an individual growing up in an essentially religious and rigidly stratified, nonegalitarian order, in which each person’s position and behavior are defined by birth and (supposedly) divine providence. Beyond the very general category of nationality (national identity), a modern individual must decide what he or she is and should do and, on that basis, construct his or her own personal identity. Schizophrenia and depressive (unipolar and bipolar) illnesses are caused specifically by the values of equality and freedom as self-realization, which make every individual his or her own maker. The rates of such mental diseases increase in accordance with the extent to which a particular society is devoted to these values—inherent in the nationalist image of reality (i.e., in the national consciousness)—and the scope of freedom of choice within it. Conflicting collective representations do not allow for the construction of a meaningful mental map, and a blurred or nonexistent identity impairs the will and dissolves the self, destroying the mind as individualized culture and leaving the individual to experience his or her thinking “I,” untethered to identity and will, as an alien presence.

The various historical connections between different layers of the cultural process come into sharper focus when one considers the spread of nationalism into Japan and China—that is, beyond the family of cultures, all embedded in monotheism, in which nationalism emerged. Nationalism was introduced in Japan by the Western powers who bent the small country to their will by their show of military strength in 1853, deeply humiliating its elites. Recognizing the implications of nationalism for collective dignity, these elites converted to the new consciousness, the country became extremely competitive, and within a few decades it emerged as a formidable military and economic power. The humiliation of China’s defeat in the First Sino-Japanese War (1894–95) was the reason for the birth of Chinese nationalism; Chinese elites also adopted it in an effort to restore the dignity of their empire. The colossal Chinese population remained unengaged until the ideological turn initiated by Deng Xiaoping (1904–97) connected national dignity to economic performance, thereby dignifying the population’s main activity. Accordingly, both nationalism and capitalism (understood as an economic system oriented toward growth) spread in Japan and China. But, unlike monotheistic civilizations—in which, by definition, reality is imagined as a consistently ordered universe and which, therefore, place great value on logical consistency—cultures (and minds) within the Sinic civilization (all cultures rooted in China) are not bothered by contradictions. As a result, conflicting collective representations (anomie), which are implicit in the freedom and equality implied by nationalism, do not have the disorienting psychological effects there that they have in societies embedded in monotheism. Remarkably, as epidemiologists have repeatedly stressed, East Asian societies remain largely immune to functional mental illness.

The prospect of a science of humanity, like the pursuit of objective knowledge through the method of conjecture and refutation about any aspect of empirical reality, holds great promise. But it can develop only in conditions that would allow for its institutionalization. Although such conditions do not exist today, they may yet exist in the future.

Social Science from the Turn of the 20th Century

Liah Greenfeld

Science and social science

It is impossible to understand, much less to assess, the social sciences without first understanding what, in general, science is. The word itself conveys little. As late as the 18th century, science was used as a near synonym of art, both meaning any kind of knowledge—though the sciences and the arts could perhaps be distinguished by the former’s greater abstraction from reality. Art in this sense designated practical knowledge of how to do something—as in the “art of love” or the “art of politics”—and science meant theoretical knowledge of that same thing—as in the “science of love” or the “science of politics.” But, after the rise of modern physics in the 17th century, particularly in the English-speaking world, the connotation of science changed drastically. Today science occupies the pole of the knowledge continuum opposite that of art (which is conceived as subjective, living in worlds of its own creation): considered as a body of knowledge that accurately reflects the empirical world, science is generally understood to be uniquely reliable, objective, and authoritative. The change in the meaning of the term reflected the emergence of science as a new social institution—i.e., an established way of thinking and acting in a particular sphere of life—that was organized in such a way that it could consistently produce this type of knowledge.

Also called “modern science”—to distinguish it from sporadic attempts to produce objective knowledge of empirical reality in the past—the institution of science is oriented toward the understanding of empirical reality. That institution presupposes not only that the world of experience is ordered and that its order is knowable but also that the order is worth understanding in its own right. When, as in the European Middle Ages, God was conceived as the only reality worth knowing, there was no place for a consistent effort to understand the empirical world. The emergence of the institution of science, therefore, was predicated on the reevaluation of the mundane vis-à-vis the transcendental. In England the perceived importance of the empirical world rose tremendously with the replacement of the religious consciousness of the feudal society of orders by an essentially secular national consciousness following the 15th-century Wars of the Roses. Within a century of redefining itself as a nation, England placed the combined forces of royal patronage and social prestige behind the systematic investigation of empirical reality, thereby making the institution of science a magnet for intellectual talent.

The goal, understanding the empirical world as it is, prescribed a method for its gradual achievement. Eventually called the method of conjecture and refutation, or the scientific method, it consisted of the development of hypotheses, formulated logically so as to allow for their refutation by empirical evidence, and the attempt to find such evidence. The scientific method became the foundation of the normative structure of science. Its systematic application made for the constant supersession of contradicted and refuted hypotheses by better ones—whose sphere of consistency with the evidence (their truth content) was accordingly greater—and for the production of knowledge that was ever deeper and more reliable. In contrast to all other areas of intellectual endeavor (and despite occasional deviations), scientific knowledge has exhibited sustained growth. Progress of that kind is not simply a desideratum: it is an actual—and distinguishing—characteristic of science.

There was no progressive development of objective knowledge of empirical reality before the 17th century—no science, in other words. In fact, there was no development of knowledge at all. Interest in questions that, after the 17th century, would be addressed by science (questions about why or how something is) was individual and passing, and answers to such questions took the form of speculations that corresponded to existing beliefs about reality rather than to empirical evidence. The formation of the institution of science, with its socially approved goal of systematic understanding of the empirical world, as well as its norms of conjecture and refutation, was the first, necessary, condition for the progressive accumulation of objective and reliable knowledge of empirical reality.

For the science of matter, physics, the institutionalization of science was also a sufficient condition. But the development of sciences of other aspects of reality—specifically of life and of humanity—was prevented for several more centuries by a philosophical belief, dominant in the West since the 5th century BCE, that reality has a dual nature, consisting partly of matter and partly of spirit.

The mental or spiritual dimension of reality, which for most of this long period was by far the more important, was empirically inaccessible. Accordingly, the emergence of modern physics in the 17th century led to the identification of the material with the empirical, the scientific, and later with the objective and the real. And this identification in turn caused anything nonmaterial to be perceived as ideal, outside the scope of scientific inquiry, subjective, and, eventually, altogether unreal.

That misconception of the nonmaterial placed the study of life and especially the study of humanity—both of whose subjects were undeniably real, though they also evidently contained nonmaterial dimensions—on the horns of a dilemma. Either those tremendously important aspects of reality could not be scientifically approached at all, or they needed to be reduced to their material dimensions, a project that was logically impossible. Both areas of study, consequently, were confined either to the mere collection and cataloging of information that could not be scientifically interpreted (in the case of the study of life, the assignment of “natural history”) or to the formulation of speculations that could not be empirically tested (so-called “theory” as regards humanity). A progressive accumulation of objective knowledge regarding these aspects of empirical reality—a science of such aspects—was beyond reach.

Biology escaped this ontological trap in 1859 with the publication of Darwin’s On the Origin of Species. The theory of evolution by means of natural selection, operative throughout all of life and irreducible to any of the laws of physics (though operating within the conditions of those laws and therefore logically consistent with them), allowed life to be characterized as an autonomous reality, breaking through the blinders of psychophysical dualism and adding to reality at least one other colossal dimension: the organic. The realization that its subject matter was autonomous established the study of life as an independent field of scientific inquiry—the science of organic reality. Since then, biology has been progressing by leaps and bounds, building on past achievements and ever improving or replacing biological theories by better ones, able to withstand tests by more empirical evidence.

Social science in the research universities

Biology thus created a way to circumvent the dualist psychophysical ontology—the cognitive obstacle preventing the development of sciences other than the one focusing on material reality, physics—and made scientific activity and knowledge possible regarding nonmaterial empirical reality, which included humanity. The necessary and sufficient conditions for the development of a science of humanity were finally in place. Unfortunately, however, no accumulation of reliable objective knowledge about humanity followed. The reason for that failure was the institutionalization in the United States at the turn of the 20th century of the social sciences as academic disciplines within the newly formed research universities.

In the half-century after the American Civil War (1861–65), the United States rapidly became the most populous and the most prosperous society in the Western world. That prosperity created numerous opportunities for lucrative and prestigious academic careers in the country’s new university establishment, whose immediately robust bureaucracies and graduate departments for professional training were soon the model for other countries to follow. The bureaucratization and departmentalization within the research universities did not affect the development of the exact and natural sciences, which were then already on a firm footing and progressing apace, but it effectively prevented the formation of a science of humanity, erecting a series of obstacles on the way to the accumulation of objective knowledge of that core aspect of empirical reality, instead of facilitating the development of such a science (e.g., by protecting practicing scientists from the pressures of public opinion).

American research universities were generally the creation of two groups: post-Civil War business magnates, who appreciated the possibilities for revolutionizing industrial production opened up by advances in physics and biology and were eager to invest in the development of science; and elements of the East Coast gentry, the scions of old families who had formed the bulk of the colonial and pre-Civil War traditional cultural elite. The latter group was not intellectually sophisticated and was not much interested in the nature or history of science. Their central concern was the change in the traditional structure of American society that had been brought about by increasing immigration and in particular by the rise, from the less genteel strata of society, of a new business elite—the “new rich,” whom the cultural elite generally derided as “robber barons.” Worried that those changes threatened their own position in society, the traditional elite also believed that great wealth, unconnected to the style of life which had legitimated social status before the Civil War, created numerous social problems and was deleterious to society as a whole. In 1865 some prominent members of the traditional elite formed in Boston the American Association for the Promotion of Social Science (AAPSS), the goal of which, according to the organization’s constitution, was

to aid the development of social science, and to guide the public mind to the best practical means of promoting the amendment of laws, the advancement of education, the prevention and repression of crime, the reformation of criminals, and the progress of public morality, the adoption of sanitary regulations, and the diffusion of sound principles on questions of economy, trade, and finance.

The constitution further declared that the AAPSS

will give attention to pauperism, and the topics related thereto; including the responsibility of the well-endowed and successful, the wise and educated, the honest and respectable, for the failures of others. It will aim to bring together the various societies and individuals now interested in these objects, for the purpose of obtaining by discussion the real elements of truth; by which doubts are removed, conflicting opinions harmonized, and a common ground afforded for treating wisely the great social problems of the day.

Rhetorically, the declaration reasserted the authority of the traditional elite, which the rise of the independent business elite had largely undermined. Wisdom and education were equated with honesty and respectability, and wise and educated members of the AAPSS, it was implied, were already in possession of social science—they already knew, prior to any research, the sound principles upon which the great questions of economy, trade, finance, and the responsibilities of the business classes should be based. In that context, “social science” was not an open-ended process of accumulation of objective knowledge of empirical reality by means of logically formulated conjectures subject to refutation by contradictory evidence. Rather, it was a form of political advocacy, practiced and supported by those who considered themselves possessed of a special insight and capable of “obtaining by discussion the real elements of truth.” In other words, the “science” the AAPSS sought to foster was an ideology.

The preoccupations of social science so conceived, as indicated in the AAPSS constitution, ranged from “pork as an article of food” to the management of insane asylums. From the start, however, two areas dominated: “economy, trade, and finance”—including national debt, industrial relations, and related topics, reflecting the economic focus of the gentry’s social criticism—and education, including the “relative value of classical and scientific instruction in schools and colleges.” Here “scientific instruction” referred to instruction in the physical sciences (biology having barely begun), which was relatively new, while classical instruction was what the members of the traditional elite had received in their own schools and colleges. The latter form of education had lost some of its prestige as a result of the demonstrated success of the business magnates, most of whom had received no formal education at all. The elite’s insistence on the social importance of such (nonscientific) education was thus connected to its need to protect its status.

Within a year the AAPSS merged with the American Social Science Association (a subsidiary of the Massachusetts Board of Charities), also formed in 1865. The leading patrician reformers—the ASSA’s officers—included three future research-university presidents, who would play a major role in the creation of these new institutions. Social scientists capitalized on the uncultured businessmen’s interest in natural science and harnessed it to their specific status concerns: offering their cooperation in developing institutions for the promotion of science, they established themselves as authorities over how far the definition of science would reach. By the time of the founding of the first research university, Johns Hopkins, in 1876, it was thoroughly in the interests of those who identified themselves as social scientists to be generally recognized as members of the scientific profession, alongside physicists and biologists. In the wake of the Darwinian revolution in biology, the prestige of science among the educated classes had skyrocketed, quickly catching up with the respect commanded by religion and indeed leaving it behind. Science was emerging as the preeminent intellectual and even moral authority within American society, and it was only natural for social scientists (many of whom, incidentally, were clergymen) to wish to share in the authority it afforded.

That desire was evident in two developments that followed closely on the heels of the founding of Johns Hopkins: the division of “social science” into “disciplines” and efforts to model those disciplines on physics. The latter development helped to establish as virtually unquestionable the twin beliefs that (1) the basis of the scientific method, what made science objective, was quantification, and, accordingly, that (2) the degree of scientific legitimacy possessed by a discipline corresponded to the volume of quantitative text it produced (i.e., the extent to which quantitative symbols were used in its publications).

The first social science to be institutionalized as an academic discipline within the research universities was history—specifically, economic history. Many social scientists from patrician American families had spent time in German universities, in whose liberal arts faculties history had already emerged as a highly respectable profession; the first American university professors were thus encouraged to see themselves as historians. In its turn, the economic focus of the new historians reflected the old target of their social criticism. In 1884, only eight years after the founding of Johns Hopkins, American historians held their first annual convention, where they formed a professional organization, the American Historical Association (AHA). During the AHA’s meeting in 1885, some historians left the AHA to form the American Economic Association (AEA). In 1903, a group of the first American economists left the AEA to form the American Political Science Association (APSA). And in 1905, some of those political scientists, who had earlier identified as economists and before that considered themselves historians, quit the APSA to form the American Sociological Society (ASS), now called the American Sociological Association (ASA). Thus, by the very early 20th century, an association of gentry activists and social critics, affiliated with a charitable organization, had spawned four academic disciplines, splitting social science into history, economics, political science, and sociology.

The relatively spontaneous fission of social science was different in character from specialization within physics and biology. Scientific specialization was prompted by developments in the understanding of the subject matter: anomalies in earlier theories contradicted by evidence, the raising of new questions, or the discovery of previously unknown causal factors. It accompanied the advancement of objective knowledge of empirical reality and contributed to its further progress. The break-up of “social science” into separate disciplines, in contrast, was driven not by scientific necessity but primarily by the desire of social scientists and research-university administrators to create additional career opportunities for themselves and their associates. Thus, in a manner of speaking, the cart was placed before the horse.

The first step in that scientifically backward process was the foundation of professional associations. The existence of professional associations ostensibly justified the establishment of university departments in which the declared but undefined professions would be practiced and new generations of professionals trained. Such associations, however, mostly contributed to bureaucratization and served vested interests, doing little to advance any genuine understanding of humanity. Two more professions with longer histories, anthropology and psychology (both of which were independent of social criticism and largely unconcerned with the threat to the status of traditional elites posed by the uncultured rich) were incorporated within academic social sciences during this formative period. In neither case did their incorporation accurately reflect their already developed professional identities, but it did not interfere with their intellectual agendas and was accepted.

The identities and agendas of the three disciplines that arose from history in the research university—economics, political science, and sociology—were to develop within that institutional environment, itself nascent and, like them, in large measure brought into being by the desire of the traditional elite to re-establish its political and cultural authority. That environment attracted to the new social sciences people actuated by three quite independent motives, which would be the source of persistent confusion regarding the identity and agenda of each of those disciplines. To begin with, the conviction of the original American social scientists that they, better than anyone else, knew how society should be organized—that they, as experts on questions of the general good and social justice, were wielders of moral authority and should be natural advisors to policy makers—persisted even after social science split into economics, political science, and sociology. The desire to be treated as the wielders of such authority, as natural leaders of society, was the first motive.

All three disciplines continued to attract people who were interested not so much in understanding reality as in changing it, to paraphrase Marx’s famous thesis. However, such authority could no longer be claimed on the basis of a genteel lifestyle: with science successfully competing with religion as the source of certain knowledge and even ultimate meaning, what was now required was recognition as scientists. Accordingly, the emphasis in social science shifted from “social” to “science,” and, as noted above, the term was understood to mean “like physics (and biology)” rather than “any kind of knowledge.” The desire for the status of scientists, specifically, was the second independent motive that attracted people to the social sciences.

That motive was also the main reason behind the rise of the discipline of economics. Economics was explicitly modeled on physics (mainly in its use of quantification to express its ideas), reflecting the general ambition among would-be economists to hold with regard to society the position that physicists (and biologists) held with regard to the natural world. Yet social scientists knew exceedingly little about natural science and the nature of science beyond the fact that physics and biology were producing authoritative knowledge of their subject matters. They had a very limited understanding of what the authority of that knowledge was based on. To them, as outside observers, it appeared (as it did to others) that scientific practice characteristically involved the use of numbers and algorithms—an esoteric language of expression. They concluded—in sharp contrast to the emerging humanistic discipline of philosophy of science, which focused on the scientific method of investigation and inference—that scientific knowledge was knowledge so expressed. Although efforts to quantify their subject matters were characteristic of all three of the newborn social sciences, economics went farthest in developing quantitative mannerisms and in substituting the outward manner of formulating ideas for the method of arriving at them. As a means of establishing professional status, that practice again proved very effective: such mannerisms eventually made economics an exclusive domain, a kind of secret society with a language that nobody else understood, and established it as the queen of the social sciences, with commensurate political influence. For their part, political science and sociology were also deeply preoccupied with their scientific status, and the quantitative methodologies and manners of expression they adopted were (and remain) valuable in maintaining it, though neither discipline has achieved the level of authority enjoyed by economics.

The cultivation of their scientific status allowed the new disciplines to view their histories as part of the history of science: the story of the progressive accumulation of objective knowledge of reality and the ever more accurate and complete understanding of causal interrelationships between its constituent elements. Just like physics and biology, it was subsequently believed, the social sciences continued and dramatically improved upon a long tradition of unsystematic (because not scientific) thought on their subjects. The persistence of that narrative—in the face of overwhelming contrary evidence—attracted to economics, political science, and sociology people actuated by a third motive: a genuine interest in understanding empirical human reality. Believing the social-science narrative, those students eagerly underwent whatever methodological training their mentors suggested and shrugged off the latter’s ideological views and related activist tendencies as personal matters. Such social-science idealists have been responsible for much worthy scholarship produced over the first century and a half of social science’s academic existence.

In the meantime, psychology—always insistent that, focusing on the individual, it was unlike the other social sciences—largely reverted to its roots in natural science, content to study the animal brain and to leave the riddle of the human mind to philosophers. The preoccupations of the other social sciences have been quite irrelevant to it. The discipline of history, almost immediately abandoned by those of its original members who were primarily interested in self-promotion, opted out of the social sciences early and joined the ranks of the humanities, on the whole practicing scholarship for its own sake rather than laying any claim to social authority. In anthropology, too, the authority of the profession and the question of whether it should be considered a science have mattered far less than in the three core disciplines of the social science family. Anthropologists have found sufficient satisfaction in doing fieldwork in settings that, while affecting them deeply, could hardly have any bearing on their standing within their own society.

As was true of natural history before the rise of biology, the disciplines of history and anthropology, along with exceptional sociologists, political scientists, and economists, have certainly added valuable information to the common stores of knowledge about humanity. But such information, not being organized according to the logic of science, cannot on its own spur the development of knowledge and, therefore, does not lead to progress in understanding. Science is essentially a collective, continuous enterprise, impossible without certain institutional conditions—very specific ways of thinking and acting—that are fundamentally different from those that currently exist in research universities, insofar as the subject of humanity is concerned. The contributions of those social-science disciplines and scholars can be likened to the insights of exceptional individuals who captured one or another aspect of material or organic reality before the emergence of physics and biology: they do not build up. Their significance is limited to cultural and historical moments of public interest in the particular subjects they happen to treat.

Public interest changes with historical circumstances, causing the social sciences to switch directions: fashionable subjects and theories suddenly fall out of favor, and new ones just as quickly come into it, preventing any cumulative development. For example, from the 1940s through the 1980s, World War II and the Cold War made totalitarianism a major focus of political science and inspired in it the creation of the subdiscipline of Sovietology. The collapse of the Soviet Union in 1991 deprived both areas of study of their relevance to policy makers and forced hundreds of political scientists to seek new subjects to investigate, resulting in the new fields of nationalism studies, transition studies, democratization studies, and global studies, among others. Meanwhile, the discontent of many intellectuals with Western society, made legitimate by the Holocaust, shifted the ideology of social justice from preoccupation with economic structures (e.g., socioeconomic class) to preoccupation with identity (e.g., race, religion, gender, and sexual orientation), affecting, in particular, sociology. The discrediting of Marxism with the collapse of Soviet communism in Russia and eastern Europe reinforced this ideological reorientation: American (and then international) sociology became the science of “essentialist” inequalities (i.e., inequalities based on ascribed identities)—inequality now replacing the longtime staple of sociological research, stratification. As a science, sociology claimed the authority to discern such inequalities and to provide leadership in their elimination. Similarly, feminist, queer, and other subaltern (subordinate) perspectives, regularly included in the syllabi of courses on social science theory, prescribed how human reality should be interpreted. Such theories in turn inspired the founding of new programs in and departments of African American, Latinx (formerly Latin American), women’s, gender, and sexuality studies, which were duly recognized as belonging within the social sciences across the United States. Because racial and sexual diversity were topmost on the political agenda of the cultural elite outside academia (being viewed within the elite as promoting equality between identity groups), the universities became politically dependent on the social sciences in the sense of being reliant on them to maintain the favor of the cultural elite. This, in turn, protected the position of the social sciences within the universities even as the STEM disciplines (science, technology, engineering, and mathematics), which generally failed to attract women and ethnic minorities (excepting Jews and East and South Asians) in significant numbers, received most outside funding. In contrast, the humanities, which had neither financial nor political utility, lacked such protection.

In a class of its own regarding authoritative status, the discipline of economics, from its beginning, oscillated between two theoretical and fundamentally prescriptive positions, both inherited from policy and philosophical debates of the 18th and 19th centuries. The classical, or liberal, position (regularly, though mistakenly, identified with Adam Smith) argued for free trade and competition and the self-regulation of the market. The opposing view, originally formulated by Friedrich List in The National System of Political Economy (1841), advocated state intervention and regulation, often in the form of protective tariffs. In the 20th century the interventionist approach came to be known as Keynesian economics, after the British economist John Maynard Keynes. After the Cold War, the classical theory was promoted largely under the name “economic globalization” and the opposing interventionist approach under the name “economic nationalism.” (That fact is ironic, as, historically, economic globalization had been an expression of the economic nationalism of the most competitive nations.) The oscillation between the two theories in economics broadly reflects status fluctuations among leading economic powers, as illustrated by the emergence of the United States—in the 19th and early 20th centuries the staunchest representative of protectionism—as the main champion of free trade immediately after World War II and by China’s analogous development as it rose to near economic dominance in the second decade of the 21st century.

One reason why there is no development in the social sciences—why, unlike the sciences, they cannot accumulate objective knowledge of reality within their domains—is that their focus is not their own: as discussed above, they shift in response to changing outside interests within the larger society. But the social sciences can greatly reinforce those outside interests by creating the language in which to express them and by placing behind them the authority of science, presenting them as objective and “true.” In the frequent cases of correspondence between outside social interests and the self-interest of the social science professions, that capacity allows the social sciences to wield tremendous influence, directly affecting the legislative process, jurisprudence, the media, primary and secondary education, and politics in the United States (and, to a certain extent, in the rest of the Americas, Europe, and Australia). Indeed, within the long tradition of Western social thought, the “social sciences” stand out as one of the most powerful social forces—that power being due almost exclusively to their name. The intellectual significance of the disconnected, discontinuous efforts of which the social sciences consist has always been limited and entirely dependent on the cultural clout of American society. In the 21st century, however, the increasing influence of East and South Asia (e.g., China and India) in world culture, economics, and politics has revealed the collective project of the social sciences as irrelevant to the concerns of societies outside the West. Claiming the authority of science but dispensing with objectivity, these academic disciplines, unlike the exact and natural sciences, can never become a common legacy of humanity. Remembered only as an episode, however influential, in 20th- and early 21st-century Western intellectual history, the social sciences could lose intellectual significance altogether.

Remarkably, the phrase “social science” came from Europe, where it stood for a science of humanity: the idea of the methodical pursuit of objective knowledge of humanity had been entertained there since the 1840s, if not earlier. That science was necessarily conceived by analogy with physics—because biology as a science did not yet exist—and it was indeed called “social physics” by Comte, who later changed its name to “sociology.” The emphasis on society was suggested by the necessity to manage contemporary sensibilities. Unlike psychiatry and psychology, which were institutionalized as medical professions, the new comprehensive science of humanity would focus on what was human outside the individual, leaving the individual to the eventual science of biology—“organic physics” for Comte—which also figured prominently in his philosophy of science. That understandable compromise, however, jeopardized the future of the science of humanity: it was not appreciated how much was, in fact, in a name.

Early attempts at a science of humanity: Durkheim and Weber

At the turn of the 20th century, two European thinkers, Emile Durkheim in France and Max Weber in Germany, adopted the name “sociology” for the comprehensive science of humanity that both, independently, set out to develop. The subject-matter of the new science, Durkheim postulated, was a reality sui generis, of its own kind. It was, like life, autonomous, characterized by its own causality and irreducible to the laws of physics or biology, though existing within the conditions of those laws. Weber was not as explicit as Durkheim, but he, too, clearly recognized the autonomy of the human realm: without it there would be no logical justification for the existence of a separate science of humanity alongside physics and biology. Durkheim conceived of sociology as essentially the science of institutions, which he defined as collective ways of thinking (involving collective mental representations) and acting in various spheres of human life—e.g., in a family, in a market, or in a legislature. In Weber’s conception, sociology was the science of subjectively meaningful social action—i.e., action conceptualized or envisioned by the actor. Thus, for both, sociology was the science of symbolic reality, though Durkheim focused on symbolic phenomena at the collective level (today generally called “culture”), while Weber’s emphasis was on the individual level—i.e., the mind. Neither, however, stressed the symbolic character of his subject. Durkheim, for historical reasons, did not use the word “culture,” but Weber, before deciding in favor of “sociology,” thought of calling his project “cultural history.”

As a science of symbolic reality, of culture and the mind across the spheres of human life, sociology necessarily integrated history and could not be imagined as separate from it: for both Durkheim and Weber, sociology divorced from history would amount to a science separate from its data. The organization of the “social sciences” in American research universities, and in all the academic institutions built on their model, would make no sense to either of them. Of course, specializations focusing on major institutions—politics, economy, family, religion, science, law—would be necessary, and Durkheim had this in mind when he spoke of political science, legal history, and anthropology as “sociological sciences,” or subfields of sociology, just as genetics and ecology are subfields of biology and inorganic chemistry and mechanics are subfields of physics. Weber examined the construction of meaning in politics, economy, and religion. For him, as for Durkheim, to consider sociology one among several self-contained “social science” disciplines, each with its own subject, would be analogous to considering biology a discipline separate from the other life sciences.

Yet, neither Durkheim nor Weber succeeded in articulating a logically justified program of research for the human science they envisioned. The term “sociology” misled them. Focusing attention on society, it implied that humanity was essentially a social phenomenon, in effect assuming rather than analyzing its ontology. But a moment’s thought is sufficient to realize that society is an attribute of numerous animal species. As a corollary of life, it obviously belongs within the province of biology, automatically making sociology a biological discipline and entailing that all sociologists, as a rule unfamiliar with biology, are unqualified to be sociologists. (The same could be said for all of the other social sciences.) The existence of sociology as an autonomous science is justified only by the irreducibility of the reality it presumes to study to organic and material phenomena.

For all the persuasiveness of Durkheim’s lucid prose, however, it was not the existence of collective representations as such that explained the need for and justified sociology. Can one imagine a more rigidly structured social life, or one more clearly governed by shared, immutable, collective representations, than that of bees? Weber’s subjective meanings were equally inadequate—in this case not because of the evidence that animal actions, which are oriented toward the behavior of others, are also based on subjective meanings but precisely because there is no such evidence: the very subjectivity of such meanings makes it impossible for them to be accessed by others. What was needed, then, was positive evidence of a qualitative distinction between humanity and the rest of the animal world, something evidently affecting all human life, to which biology had no access. The intellectual milieu of both thinkers led them away from such evidence.

Despite explicitly postulating that the reality he focused on was sui generis, Durkheim never committed himself as to the nature of that reality. Although he was exclusively preoccupied with human social reality, his emphasis on the social obscured the distinctiveness of humanity and made it unclear why mental representations should be so central in his thinking. Durkheim’s attitude to psychology further complicated matters, leading him to insist strenuously that sociology was concerned only with collective representations and not with individual “ideas” and that it had nothing in common with the psychology and psychiatry of his day, which were predominantly biological, focused on the organ of the brain.

As Durkheim, in France, had to manage relations with scientists who doubted the scientific credentials of sociology, the difficulty that Weber faced in Germany had to do chiefly with philosophy: to pursue his research agenda, he needed to place himself outside the materialist-idealist dispute. As noted above, materialism was identified with the realm of the real and claimed all of empirical science as its own. Although action certainly belonged to the real, Weber’s interests lay with the empirical study of motives and ideas—which, philosophers would say, being ideal, could perhaps be intuited but could not be studied empirically. Weber thus declared action to be the subject of sociology, but he defined “action” as encompassing both action and inaction—as being both overt and covert, active and passive, comprising both decisions to act (to publicly express thoughts through acting) and decisions not to act—all of this insofar as it was subjectively meaningful for the actor. While enormously productive in the sense of directing so much of Weber’s work, that stratagem was not successful: Weber’s sociology is still commonly interpreted as an idealist response to the historical materialism (see dialectical materialism) of Marx. But Weber was no more an idealist than a materialist. Both disembodied ideas and material phenomena (e.g., population, natural resources, death) interested him only in their meaning for the relevant actors—that is, the ways in which such ideas or phenomena interacted with the individual mind and were reflected in and interpreted by it. But the mind, populated as it was with ideas from the outside, was at every moment connected to the collective consciousness on which Durkheim focused. Durkheim’s collective representations, interacting with the mind, created subjective meanings—the central subject of Weber’s sociology.

Both of the founding thinkers of sociology thought of it as the science that investigates specifically human mental phenomena. Unfortunately, “collective representations” and “social action” were vague new terms that suggested many things to many people, so much so that neither of the two thinkers had any inkling of the close affinity between their projects. Being unable, because of the dominant intellectual trends in their respective countries, to name their subject matter clearly, they were also unable to determine or properly analyze its nature or to argue convincingly why it, and only it, justified the establishment of a new, independent science alongside physics and biology. In the meantime, in the United States, powerful vested interests already stood in the way of such a science.

See also Outline of A Future Science of Humanity

Cherryleaf Library for Children

For those of my friends who have children of preschool to elementary-school age:

In Mind, Modernity, Madness I have written that, to arrest the ever-rising rates of functional mental illness in the United States, we would need to revamp our system of education, beginning with kindergarten. (As I am writing to my friends, I presume that you have read Mind, Modernity, Madness.) Onset is occurring earlier and earlier, so that mental disorder is common in middle school and already rampant in high school, which means that the work of prevention, making children resistant to mental illness, has to begin at an age before the assault starts.

Since the agent of the disease in this case is cultural – the inability of modern (secular, egalitarian, and open) culture to provide large swaths of people with sufficient guidance for the formation of a clear identity – the preparation and prevention must also be cultural: the intentional provision of such guidance to young children. To do so through the channels of the educational establishment would require that the message of Mind, Modernity, Madness achieve the status of self-evident truth – something that is evidently not happening right now and is unlikely to happen in the near future – and that an entire generation of educators be educated in its light and know how to help a child form a clear identity.

The understanding of identity in our society is grossly underdeveloped, and the vague ideas regarding it that do exist are based precisely on the presuppositions that make the formation of identity in our society so problematic. The chief of these presuppositions, perhaps, is that each individual is born with an unchangeable identity – an essential self, which will, and must be allowed to, have an expression, for its repression condemns one to unhappiness and leads to mental disease. This presupposition encourages people to “discover” themselves, to do which they must focus on themselves; i.e., they are effectively educated to be self-centered. Alongside this presupposition of the essential individual self exists the contradictory idea of identity as the essential self of one’s biologically defined group, racial (which, upon analysis, includes ethnicity) or sexual (which includes sexual orientation). This presupposition encourages the individual to discover in oneself the identity of one’s presumed group (which, of course, in no way helps one to develop a functioning identity, because it does not locate one in a clear position on the socio-cultural terrain) and to focus on the political defense of this group’s rights, specifically demanding that the group be treated in every respect equally with other groups. Paradoxically, the two presuppositions (of the essential individual self and of the essential biological group self) are combined in the popular consciousness and in the educational curricula reflecting this consciousness.

To combat this on the level of the educational establishment is beyond the powers of any individual or small group of individuals. A revolution – a complete breakdown of the social order and the construction of a new one in its place – would be needed to effect the required change of thinking. The only way to help children form functional identities (and identities are formed, not innate, not reflections of some inner essence) and prevent their developing functional mental illness is to do so from outside the educational establishment. Children are raised in and by culture, a process consisting of numerous specific processes, interrelated in numerous distinctive ways: the family process, the educational establishment (institutional) process, religion, media, literature, and so on. Our culture, in general, does not offer us sufficient guidance in forming our identities, but some of its constituent processes do help. Formal educational institutions – that is, institutions specifically entrusted with the transmission of the dominant cultural messages – reinforce this cultural insufficiency, as might be expected, and the family, which is all-important in the child’s early years, is quite likely to reflect it, because the parents are products of the culture themselves. But literature, for instance – the books we read to our children when they are little and the books they start to read themselves – is far more heterogeneous. While most books in our bookstores transmit the dominant cultural messages (for instance, the two presuppositions, mentioned above, that are inimical to the formation of a clear, stable identity), there are some that can provide a counterweight to them. If organized into a systematic program and read at home from a very early age through kindergarten and elementary school (and, perhaps, in some kindergartens and elementary schools where individual teachers would appreciate their benefits), such books could help children form firm identities, which would in turn enable them to withstand the assault of the contradictory messages of our secular, egalitarian, open (anomic) society and protect them from mental disease.

Books that can help in the process of identity formation in an anomic society do so in a way very different from that of societies which simply impose identities on their members by limiting individual experience to a particular, very limited area on the cognitive map of the socio-cultural terrain. They do this, instead, by presenting a picture of the human behaviors probable in open societies, distinguishing (in the manner of presentation) between right and wrong, good and evil actions, and provoking sympathy with the suffering of others and antipathy toward those who cause this suffering. The confusing reality of contemporary society is simplified: presented, underneath the apparent heterogeneity of observable behaviors, as the confrontation of good and evil, defined basically as kindness vs. cruelty (the intentional causing of suffering), with other behaviors and attitudes ranged in between.

When one’s cognitive map of the socio-cultural terrain is drawn in these simple terms, being a good person becomes the core of one’s ideal identity – what one strives to be, the goal of one’s self-realization. One is encouraged to take advantage of the freedom and equality offered by the open society not to “discover” but to “make” oneself: to cultivate one’s empathy (which presupposes focusing on others), to be actively kind, useful to those who are weaker, in need of help, and in need of protection from suffering. Competitiveness – the constant comparison of oneself to others, the measurement of oneself against them in quantitative terms of relative achievement and virtue (do I have more or less money, accolades, professional success, intelligence, beauty, and so on, than x, y, z, to whom I should be equal?) – which is encouraged by the egalitarianism and freedom of the open anomic society, and which in turn encourages the envy, self-doubt, insecurity, and sense of inferiority that contribute to social maladjustment and in so many cases ultimately lead to mental illness, fades into near-irrelevance. One’s identity-map is no longer the map of an endless race-track with oneself as one of the racers, constantly in danger of being left behind or overturned. It is no longer one’s comparison to others but the calls for one’s help that determine one’s position on the map: wherever they come from on one’s socio-cultural terrain, there one gravitates; one’s conduct is oriented by these calls, by thinking about the needs of others.

This must appear too simplistic a characterization of the complex masterpieces of the 18th and especially the 19th centuries, be it Dickens’s Great Expectations, Flaubert’s Madame Bovary, or Dostoyevsky’s Brothers Karamazov, and, of course, the message is more or less explicit in different books, even by the same author. Such, nevertheless, is the common basic message of the great modern – psychological – novel, called into being by the need to make sense (for the authors, in the first place) of the secular, egalitarian, anomic society. All these novels treat of the provocations with which the anomie of the open society confronts the individual unanchored by a clear identity; all see mental illness as a constantly lurking danger. Depictions in black-and-white contrasts, as in the “sensationalist” best-sellers of Wilkie Collins, are not to be met among the greater artists, whose novels are likely to focus on the behaviors of the middle range, eschewing absolute good and absolute evil. Still, they all advise: be guided by the understanding of fundamental right and wrong, focus on the world, not yourself, be kind, above all, and things may turn out all right – at the very least, they won’t go horribly wrong: you won’t go mad.

Psychological novels of the 18th, 19th, and early 20th centuries, from Moll Flanders to An American Tragedy, are a great antidote to the disorientation of the open egalitarian society and a very powerful educational tool. Whether encouraged at home or integrated into school curricula, this literature can be very helpful for young people trying to come to grips with – and form their identity in – the baffling anomic world. This literature is rich and may provide emotional support for years. Unfortunately, it is not a preventative therapy, because it cannot be administered to pre-teenage children. Yet it is among the pre-teens that defenses against anomie must be built.

Identity-formation-facilitating literature for young children, ages 3 to 12 – where it is especially needed – is very sparse. In English, even if one includes translations, there is nothing of this nature for children under 6 or 7 years of age. Taking whatever exists into consideration, I set myself the goal of creating an English-language corpus of such identity-formation-supporting children’s literature, organized as a continuous stream of age-appropriate reading, from stories for toddlers through older preschoolers and kindergarteners to older elementary school children. Readings for 3–5-year-olds would have to be created, and I intend to use the work of the exceptional Russian writer Korney Chukovsky, who wrote for very young children, as the foundation for this segment. Chukovsky’s goal in writing was to cultivate in the child kindness (humane disposition) and empathy, “this marvelous ability to worry about other people’s misfortunes, to rejoice at other people’s joys, and to experience another person’s destiny as one’s own.” He did not think that Russian children of the early 20th century needed aid in identity formation, but his “tales” provided this aid nonetheless, while teaching the child to focus on others, not on oneself, and to consider being good – actively kind to the defenseless and helpless – the most important quality of a person. In fact, Chukovsky’s tales are the only equivalent of the modern psychological novel for very young children – at least, the only one that is relatively well known. Even in Russian, which can boast a very distinguished tradition of children’s literature, there is nothing else of the kind.

Having for over a century contributed to the upbringing of Russian, Soviet, and post-Soviet children (and served as a counterweight to literature faithfully expressing the social values and cultural presuppositions dominant in each of these periods), Chukovsky’s tales have been translated into many languages. Even in English there are some translations. The problem is that in the original these tales are poems. While in the original text, reflecting the creative process in the mind of the author, prosody and content develop organically, mutually inspiring and reinforcing each other, in translation the very desire to keep the rhymed form obscures the meaning of the tale and interferes with the delivery of its message. Therefore, instead of attempting another rhymed translation, I decided to re-tell Chukovsky’s tales in prose. I selected five of them that appeared to me most directly relevant to the project of assisting very young children in identity formation – preparing them to meet the challenges, while resisting the pressures, of our society, and immunizing them to some extent against mental disease. I hope to publish them as individual picture books, which would allow parents to read and re-read them to their children and children to leaf through them and let the illustrations remind them of the story for months afterward. But securing a publisher may take a long time, and I would like to make this identity-formation-supporting literature for the very young available immediately. So, please, watch for Cherryleaf Library for Children on YouTube: I’ll read the tales on video as soon as I figure out how to do so.

Why Cherryleaf? In honor of my mother, Victoria Kirshenblat (Kirschenblatt = Cherryleaf), who was an exceptionally good person, actively kind under all circumstances, daily diminishing suffering wherever she found it – among people and animals alike. She was a pediatrician by the grace of God, an extraordinary children’s doctor. For decades she had patients whose parents had been her patients; by the end of her working life, she had patients whose grandparents had been her patients. While in medical school, she thought of becoming a psychiatrist, so she was acutely aware of the realities of mental disease. She would certainly support this new undertaking of mine, and I prefer it to be associated with her name rather than with mine: “Professor Greenfeld” would mean nothing to children and nothing but an imposition of academic authority to their parents.

I also intend to start reading books for slightly older children, 6 and up, beginning with “Nobody’s Boy” by Hector Malot. This book belongs to the tiny identity-formation-assisting corpus of literature for this age group that I mentioned. It is available in English but is quite unknown, and reading it aloud online, I believe, would attract more attention to it than simply recommending it. So please watch for the Cherryleaf Library podcast.

Liah Greenfeld

Approaching Human Nature Empirically

April 29, 2017, talk at the “Revisiting Maslow: Human Needs in the 21st Century” workshop at Princeton University.

Abstract: Human nature has been a subject of speculative thinking for ages. But it is an empirical phenomenon and, as with any empirical phenomenon, the only way to acquire objective, reliable knowledge about it is to begin with separating it from adjacent empirical phenomena. Today, developments in biology allow us to know exactly what separates humanity from the animal world, to which it also belongs — we, therefore, no longer need to speculate what makes us human. The talk will focus on what comparative zoology teaches us about human nature.


Humanity differs dramatically from non-human social animals in its behavior and in the organization of its ways of life. No other social species, for instance, organizes its ways of life (i.e., its social order) so variably or allows such an enormous scope for individual creativity/deviance. Human societies are remarkably flexible in comparison to the rigid social orders of other intelligent social species, such as lions, wolves, and even primates. This vast difference makes humanity a reality of its own kind and, therefore, a separate subject of study, which justifies the existence of the social sciences and humanities: for the social sciences, too, despite their name, do not focus on the many complex and highly differentiated societies that exist, from those of ants and bees to those of baboons and chimpanzees, but exclusively on human societies. Yet it has proved impossible to account for this difference by the difference of the human animal from other animals – that is, genetically. Genetically, our species differs from other animal species to different degrees, clearly much more from some than from others. But even species within our biological family – the primates – remain as drastically different from us behaviorally as those on altogether different branches of biological evolution. Only 2% of the genome of our species – which we somewhat arrogantly, and in ignorance of other animals’ cognitive capacities, call Homo sapiens – distinguishes our animal nature from that of our closest genetic relative, the chimp. These two percent must account for the different shapes of our and chimps’ bodies and skulls, postures, and distribution of bodily hair, the different organic diseases to which we and they are subject, and our different life spans and procreation patterns – all the things that separate two closely related species of the same family, such as horses and donkeys, lions and tigers, or chimps and orangutans, from each other. Nothing is left in this small biological difference to explain the vast differences on the behavioral level – the differences in the organization of our ways of life that make humanity stand so completely apart from the animal world.

But, if it is not our biology, then what? Comparative zoology provides an answer. While all other animals transmit their ways of life (primarily) genetically, we transmit our ways of life almost exclusively by means of symbols. This is a qualitative difference – a difference in kind, which puts us into a different category of being. This is what makes us human.

The process of genetic transmission is the process of life, the subject of biology. As animals, we are involved in this process and, as living beings, we share numerous biological needs with other animals. Like them, we must breathe and eat; like many of them, we must have shelter; like all the social animals, we must belong to a group; and so on – to survive as animals. But, in addition to being animals, we are also something else. In addition to being part of the process of life and subject to its laws, we also participate in a different, autonomous process (that is, a process ruled by laws other than biological ones) – the process of the symbolic transmission of our ways of life, which we call culture. Culture adds a dimension on top of our animal nature, superimposing another nature on it, so to speak. It is this other, superimposed nature that makes us human. Human nature, in other words, is cultural. To understand it, we must analyze culture.

In the rest of the talk, if there is time, I can expand on this and actually analyze culture along the lines of the first two chapters of Mind, Modernity, Madness: The Impact of Culture on Human Experience. What is clear even before such an analysis is that human needs – unlike animal needs, which can be deduced from the nature of the life (organic, biological) processes that may be largely the same for a whole branch of the phylogenetic tree or, to revert to Linnaean terminology, a whole class of animals, such as mammals – can only be deduced from the nature of the cultural process in general and, because of the extreme variability of cultures, from the nature of specific cultures. With human needs created by cultures, it would not stand to reason that the needs of people in Tokugawa Japan, Imperial China, or tribal Arabia, for instance, would be the same as the needs of citizens of modern democracies.

When animals and birds take on our own characteristics

By Liah Greenfeld

First published in South China Morning Post, June 28, 2015

Have you heard of Alex, the African Gray parrot, considered the smartest bird in the world? He lived in a cage in an animal-behavior lab at Brandeis University, where his trainer, the scientist Irene Pepperberg, worked. He spoke English with a sweet, childish voice and could count and distinguish shapes. He was sensitive and creative. When Irene appeared flustered, he would tell her: “Take it easy. Calm down.” His beak made it difficult to pronounce the letter “p,” so, when asked to identify an apple, he invented the word “banerry” — half banana, half cherry — and, not knowing what to call a cake, suggested “yummy bread.” Irene trusted him to train younger chicks, and, for some years, he actually taught at a university. At the end of their working day, Irene would return Alex to his cage, lock the lab, and go home. This is how it was on the night of the heart attack that would kill him. Before she left, Alex told Irene: “I love you. Be good.” The next morning, she found him dead in the cage.

Alex’s brain was the size of a walnut. But his behavior was undeniably human. This raises the question: what is humanity? Clearly, to behave — to think, feel, act — like a human, it is not necessary to belong to the biological species of hairless monkeys with big brains, such as ours. But, if it is not our vaunted brain that makes us human, what does? Comparative zoology provides the answer. In the entire animal kingdom, only humans transmit their ways of life symbolically, rather than genetically. Such symbolic transmission is what we call “culture.” The distinguishing characteristic of humanity, culture is what makes us human.

In distinction to genetic transmission, culture is not an organic but a historical process, because the meaning of symbols changes with the context and always depends on time. Being of a different nature, culture cannot be explained biologically or reduced to biological phenomena, even though it requires the body, with its physical needs, to exist. Rather, and analogously to life, which also requires inanimate matter for its construction but cannot be reduced to or explained by it, culture represents an emergent reality, resulting from a most improbable accident within the organic reality within whose conditions it emerged. Like life, it is a reality of its own kind, autonomous, operating according to causal laws specific to it, which affect the organic processes related to it and transform its physical environment.

We are all familiar with the dramatic effects of culture on the material world around us: our cities, means of transportation, the clothes on our backs, the fields we till, the land we reclaim from the oceans — all these are products of culture, material results of symbolic processes, of our thinking expressed in words, designs, plans. On the organic level, culture leaves its deepest imprint on the brain of the creatures it affects, transforming their very nature and life. For, as it forces the brain exposed to it to process symbolic stimuli, it creates within it an autonomous phenomenon, symbolic and mental, unknown in the natural world, where the brain processes only sensory stimuli — the mind. Otherwise called the soul, the mind, empirically speaking, is none other than culture in the brain. One becomes human when one acquires a mind.

The mind is acquired as a result of exposure to culture and of the necessity to adjust to a cultural environment; it is not a genetic characteristic. This has two significant implications. The first is that nobody is born human. A baby of human parents is not a human being but an animal very likely to become human, and, given how prolonged infancy is in our animal species, only rarely do our babies develop a mind (and acquire humanity) before three years of age. The second implication is that animals of other species that procreate exclusively in the human, cultural environment — specifically, the dogs and cats whom for thousands of years we have involved in the most intimate aspects of our life — are, just like us, sharply distinguished from wild animals by culture and are, therefore, also human. What distinguishes these animals from us is not that we have a mind while they don’t (because they certainly adjust to culture and thus have culture in the brain) but that the structure of their larynx — in distinction to that of African Gray parrots, for instance — is different from ours and does not allow them to articulate sound, depriving them of speech. They are humans who are physically disabled. This is particularly true of dogs, whose brain, inherited from arguably the most intelligent wild animal — the wolf — probably equals ours in its complexity and sophistication. (The claim of dogs’ humanity will resonate with anyone who has known a dog’s companionship, though the unquestioned identification of humanity with our species would have prevented most of us from admitting the truth of this claim even to ourselves. Yet, until now, very few would have been able to explain what made the species Homo sapiens human.)

It would be hard to exaggerate the ethical significance of this logical inference from the empirically based definition of humanity. Our treatment of dogs and cats becomes subject to the same standards of judgment which we apply to our treatment of other defenseless and helpless members of our societies, such as little or disabled children. Like them, these acculturated animals are thrown on our mercy and entirely dependent on us for their survival and protection from suffering. When they suffer, their experience is not different from that of such children. Because these animals are part of humanity, because what makes us human must make them human as well, decent people and societies can no longer be indifferent to their suffering or tolerate intentional cruelty in their regard.

On June 22, the annual dog meat festival begins in Yulin. Dogs are human. Thus, this is a festival of cannibalism. But it is not what happens after death that is important. Before they are butchered and eaten, thousands of dogs are caught, shoved into dirty crates too small for the numbers they contain, and trucked, hungry, thirsty, suffocating, and terrified, to the place of their death. You can easily find photos of these transports on the Internet. Meet these dogs’ eyes. You won’t be able to sleep for weeks.

A Revolution in Philosophy (and Social Science) in 800 Words

By Liah Greenfeld

First published as “Modern social science deeply indebted to Darwin” in South China Morning Post, June 7, 2015

We are all aware of the power of science and treat it as the supreme authority in matters pertaining to our understanding of our world. Of all intellectual endeavors, science alone has proved progressive — building on and constantly adding to previously accumulated understandings, expanding their reach. This is evident in physics and biology: our understanding of both the material and the organic realities becomes ever deeper, increasing our control over them. Not so in the social sciences, which focus on the reality most pertinent to us: humanity itself.

We hardly understand humanity better than we did at the end of the 19th century, when the social sciences were first ensconced as such in American research universities — the model for the entire world. Separated by arbitrary divisions which obscured the commonality and the very nature of their subject, the social sciences were misconceived from the start. They assumed that society was humanity’s distinguishing characteristic, whereas it is in fact a corollary of animal life.

What distinguishes humanity from the rest of the animal world is not society but the way it is transmitted: while all other species rely on genetic transmission, humans rely on culture (or symbolic transmission). Much more flexible, cultural transmission explains the variability of human societies as compared to the near-uniformity of social orders within all other species. Culture, and not society, should be the focus of the social sciences.

But, if culture is, as is commonly assumed, a function of the human brain, the social sciences must belong within biology. The only thing that would justify their existence as autonomous disciplines is the irreducibility of this distinguishing characteristic of humanity to the organic and material realities. If humanity is not a reality of its own kind, they represent biological or physical disciplines, and social scientists, usually biologically and physically illiterate, are unqualified to be social scientists.

To prove such irreducibility — that is, to prove that the distinction between humanity and other animals is qualitative, not quantitative — one needs, first, to resolve the 2,500-year-old central problem of Western philosophy. Western philosophy pictures reality — the entire world of experience — as a universe composed of two heterogeneous elements, matter and spirit, which, derived from one source and thus assumed to be fundamentally consistent, nevertheless appear to be contradictory.

Both elements may be accessible to reason, through observation or faith, but their assumed consistency escapes logical and empirical proof. This was acknowledged by the 19th century. From this acknowledgment resulted the division of intellectual labor: the realm of the spirit went to speculative philosophy, and empirical science became the authority over (while limiting itself to) material reality. All empirically accessible reality was deemed material, and it became impossible to imagine an empirical science that was not a part of physics.

Fortunately for students of humanity, the psycho-physical problem was resolved in 1859 by Charles Darwin. This was a colossal problem for biology as well: Life, too, could be approached scientifically only through physics, but it proved impossible to explain its regularities through physical laws. Thus, the science of biology did not develop: our understanding of living phenomena by 1859 had hardly advanced beyond Aristotle.

Western philosophy – our fundamental vision of reality – did not allow for an autonomous science of biology. A new ontology was needed, which Darwin provided in On the Origin of Species. By demonstrating a form of comprehensive causality operative in life that had nothing to do with the laws of physics but was logically consistent with them, Darwin established life as an autonomous, empirically accessible reality, dependent for its existence on material elements but irreducible to them and, as concerns causal mechanisms, not material. Thus he transcended the dualist, spiritual-material ontological vision and liberated empirical science from the hold of materialist philosophy.

Now one could imagine empirical reality, accessible through observation, as consisting of heterogeneous, though logically consistent, layers, material and organic, and, within this new ontological framework, biology, unchained from physics, rapidly developed. Influential philosophers still think Darwin established a unified framework in which everything can be understood as a derivation from fundamental physical laws; in fact, he established precisely the opposite.

Though the concept was created later, he gave us the possibility of thinking of empirically accessible reality, open to scientific investigation, in terms of emergence – as consisting of autonomous layers, each upper layer existing within the boundary conditions of the one below, to which it is causally irreducible.

There are three such layers, the two upper ones emergent — the material, the organic, and the cultural (or symbolic). This justifies seeing humanity as a reality of its own kind and the existence of an autonomous group of scientific disciplines focused on it and its distinguishing characteristic — culture.

This view may save the many resources now wasted in futile attempts to analyze social structures without knowing anything about biology and to reduce culture to the brain, while motivating a systematic exploration of the fascinating subject the social sciences now mostly overlook. Perhaps the social sciences, too, will then deepen our understanding of the world.

Liah Greenfeld is University Professor at Boston University and Distinguished Visiting Professor at Lingnan University. This article is based on her lecture delivered recently at the University of Hong Kong.

Computers Vs. Humanity: Do We Compete?

By Liah Greenfeld and Mark Simes

From our point of view, the “us all” object of the question—”Will computers outcompete us all?”—refers to human beings, and presumes that the individual and collective human capacities—particularly, the capacities of the mind, or intelligence—are essentially comparable to the capacities of computers. Only on the condition of the essential comparability of human intelligence and the cognitive capacities of computers does the question of this symposium make sense. The answer, therefore, entirely depends on whether these capacities are indeed so comparable, consequently bringing into question the nature of human intelligence and thus of humanity.

Admittedly, the biological, or neuroscientific, response to this question is unclear. The prevailing approach in the field of human neuroscience emphasizes the size and complexity of the human brain vis-à-vis other nervous systems in an attempt to explain the unique qualities of human intelligence. The logic that supports this approach is based on the assumption that an increase in neuronal density and network complexity necessarily results in the appearance of qualitatively new cognitive capacities. The perceived task of neuroscience, therefore, is to unpack the complexity of the human brain to find the “missing link” — or links, for the sake of complexity — that results in something akin to the cogito of Descartes.

The concept of technological singularity is based on a similar logic and imagines a process that travels from the original point of human intelligence in the opposite direction of biological reductionism, though its principles are fundamentally the same. Futurists predict that, whatever the technological medium may be, engineering a sufficient increase in computational complexity will result in machine intelligence that replicates and perhaps surpasses the cognitive capacities of the human brain. The futurist-technological position, therefore, seeks to re-pack the processing complexity of the human brain to arrive at virtual human minds.

Gerald Edelman cites the incredible complexity of the human brain in his book Bright Air, Brilliant Fire. In the cortex alone, he writes, there are about 10 billion (10^10) neurons. The actual connections between these neurons may number one million billion (10^15). As for the possible connections in this matrix, Edelman writes that this number is “hyper-astronomical”; he must mean this in a very literal sense, because he then goes on to indicate that it exceeds the number of positively charged particles in the known universe. Edelman’s preliminary conclusion from these incredible facts is that the size and complexity of the human brain make it “so special that we could reasonably expect it to give rise to mental properties” [1].
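A back-of-envelope calculation (added here as an illustration; it is not Edelman’s own arithmetic) shows why “hyper-astronomical” is, if anything, an understatement. Assume only that each of the roughly 10^15 cortical connections can be independently present or absent; the number of possible wiring patterns is then

$$2^{10^{15}} = \left(10^{\log_{10} 2}\right)^{10^{15}} \approx 10^{3 \times 10^{14}} \gg 10^{80},$$

where 10^80 is a standard estimate of the number of protons in the observable universe.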

What is overlooked in such paeans to quantity and complexity, however, is the astounding regularity with which these connections/networks/brains seem to form in the billions of individual humans who span distances and generations. The essential question may not be how this complexity gives rise to human intelligence or consciousness but, instead, how this complexity becomes systematically ordered, so that any process that an individual brain supports (that is, any individual mental process) becomes an organized, patterned process — to say nothing of its self-intelligibility or its intra-species communicability.

The theory of evolution provides us with an explanation of how complex nervous systems evolved in multicellular organisms, allowing animal bodies to interact with a dynamic and unpredictable external environment. This dynamism and indeterminism of stimuli in the environment are correlated with the nervous system’s unique physiological characteristic: the capacity of neurons and networks for learning and memory. Interactions with external stimuli effect changes in the nervous system, which organize and solidify networks of neurons to respond and combine in ways that reflect the influence and challenges of the species’ environment. In every case, therefore, it is a combination of the genetic information of a species and its interactions with the environment that organizes the networks of its nervous system.

In the biological world, stimuli occur to an organism as signs, directly conveying information derived from a physical-chemical aspect of their referents in the environment. Empirical investigation (that is, investigation of actually existing characteristics) of human cognitive processes, however, shows that humanity is essentially unlike any other animal species in one crucial respect, from which numerous characteristic features derive. Unlike the signs to which animal nervous systems and societies stand in a rigid, determined relationship, the defining stimuli of human mental life are laden with meaning that cannot be traced to the physical-chemical constituents of the medium in which they are delivered. The primary stimuli in human mental life, in other words, are symbolic. While all other animal species process signs in their environment and transmit their ways of life (including their social organization) genetically, humans are constantly interacting with, and transmit their ways of life by means of, symbols.

Symbols are intentionally articulated signs and, in sharp contrast to signs, they represent phenomena of which they are not a part. In this sense they are arbitrary, dependent on choice. The meaning (the significance) of a symbol is given to it by the context in which it is used, and this context is constituted by associative relationships to other symbols. Language is the clearest example of this feature: words are not definite, and linguistic communication is both a creative act on the part of the producer and an interpretative act on the part of the receiver. As a result of the dynamic, ever-changing meaning of symbols and their contextual dependence upon an equally dynamic matrix of other symbols, the significance of any instance or set of symbols is both constantly changing and endlessly proliferating. It is this dynamic change and self-proliferation of symbols that creates the innumerable variability among human minds and human societies. We call this symbolic process of transmission of human ways of life culture and assert that it is the symbolic nature of culture that constitutes the causal force in human history [2].
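The sign/symbol contrast lends itself to a minimal computational sketch. The following Python fragment is purely illustrative (every name in it is hypothetical, and it models no one’s actual theory): a sign triggers a fixed response regardless of context, whereas a symbol’s significance must be computed anew from the other symbols surrounding it.

    # Illustrative sketch only: a sign maps to a fixed response, while a
    # symbol's significance depends on the surrounding symbolic context.

    # A sign: a fixed, context-independent stimulus-response mapping.
    SIGN_RESPONSES = {
        "alarm_call": "flee",
        "food_scent": "approach",
    }

    def respond_to_sign(stimulus: str) -> str:
        """Same stimulus, same response, every time."""
        return SIGN_RESPONSES.get(stimulus, "ignore")

    # A symbol: the same token is interpreted differently depending on the
    # associative context of other symbols in which it appears.
    def interpret_symbol(token: str, context: set[str]) -> str:
        """The significance of 'bank' is given by its context of use."""
        if token == "bank":
            if "river" in context:
                return "edge of a waterway"
            if "money" in context:
                return "financial institution"
        return "meaning deferred to further context"

    print(respond_to_sign("alarm_call"))                   # flee
    print(interpret_symbol("bank", {"river", "fishing"}))  # edge of a waterway
    print(interpret_symbol("bank", {"money", "loan"}))     # financial institution

However crude, the sketch makes the relevant point: the first mapping could in principle be exhausted once and for all, while the second, in a real language, never could, because the contexts themselves keep changing.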

In the words of the great historian and philosopher of history Marc Bloch, historical science — which focuses on human history, whose subject matter and data all the social sciences and humanities share — is the science of the mind. It is focused on the qualities and permutations of human consciousness. Indeed, it is one such permutation — claimed to be singular and unprecedented in its dimensions and importance — that the concept of technological singularity predicts. The verdict regarding technological singularity depends on whether history allows for such a singular and absolutely unprecedented change, or whether all great historical transformations, of which there have been many, are fundamentally the same. This leads us to consider the nature of human consciousness itself — the mind.

For those who, while perhaps experts in other areas, consider humanity only from the perspective of laymen, the mind is just another name for the brain. Thus Dan Dennett, without much ado, equates the human person with “the program that runs on your brain’s computer.” This lay perspective, which reduces humanity to a biological species — qualitatively, that is, essentially, equating it with all other biological species, from which it may then be distinguished only quantitatively — is a necessary background for the concept of technological singularity. Only within this framework does the question “Will computers outcompete us all?” make any sense, and only within it can the question be raised and answered.

In contrast, we argue that culture makes humanity, and therefore human intelligence, a reality sui generis—a reality of its own kind. It is this process of transmission, unique in the animal kingdom, that explains why only humans have history and why, in distinction to even the most remarkably sophisticated, minutely stratified, and rigidly structured animal societies—such as those of bees, of wolves and lions, or of our closest primate cousins—human societies are almost infinitely variable across distances and generations. Culture constitutes a world of its own: an autonomous, self-creative world that functions according to historical laws of causation that apply nowhere in non-symbolic reality.

Of course, the symbolic, historical world of culture is supported by the mechanisms of the human brain, without which, it is certain, it could not have emerged in the first place. The use of every symbol, the perception of its significance, its maintenance and transformation are supported by the mechanisms of the individual brain and reflected in some, not necessarily specific, physical-chemical neuronal activity. Therefore, the symbolic and historical cultural process is also a mental process. But culture does not endure by originating anew, again and again, in newborn individual brains; rather, it is a ready-made cultural environment, rich with symbolic stimuli, into which all new human brains are born. Culture is the symbolic process by which humans transmit their ways of life on the collective level; on the individual level—the level of the individual human being with his or her brain, in which this process is active—this process is called the mind. On the collective and the individual level alike it is at every moment the same process, separated only by the focus of analysis (i.e., whether it is sociological or psychological). Thus, we can accurately call the mind “culture in the brain.”

In certain respects the brain can be compared to a computer. However much more complex the former is than the latter, the difference between them is quantitative, pertaining to how much information from the outside each can process and how fast and accurately it can do so. But the mind is an altogether different matter: it is not a more powerful brain than any other we know, because it is not a brain at all, and for this reason it cannot be compared to even the most powerful computer imaginable. Instead, the mind, as suggested by its definition as “culture in the brain,” is a symbolic process representing an individualization of the collective symbolic environment. While the mind is by no means equivalent to the brain, it is certainly supported by the brain at every moment of the process, and it may in fact be the symbolic processes of the mind/culture that organize the connective complexity of the individual brain.

Thus, in distinction to both the current neuroscientific paradigm and the approach of futurists who equate complex structure with emergent intelligent capacities—the foundations of these two schools are, remember, fundamentally identical—we hypothesize that the symbolic, cultural environment is causally responsible for reining in the hyper-astronomical complexity of connective possibilities in the human brain. Furthermore, we argue that the mapping and explanation of the organization and biological processes of the human brain will be complete only when this symbolic, and therefore non-material, environment is taken into account.
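To give a rough sense of the scale the phrase “hyper-astronomical” invokes, a back-of-the-envelope calculation may help (a minimal sketch: the neuron count of roughly a hundred billion and the all-or-none model of pairwise connection are simplifying assumptions introduced here for illustration, not figures from the argument itself). If each ordered pair among n neurons may either be connected or not, the number of possible wiring diagrams is

\[
2^{\,n(n-1)} \;\approx\; 2^{10^{22}} \;\approx\; 10^{3\times 10^{21}}
\qquad \text{for } n \approx 10^{11},
\]

a quantity whose exponent alone dwarfs the roughly 10^80 atoms estimated to exist in the observable universe; hence “hyper-astronomical.”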

This approach, although most directly relevant to human neuroscience, has important implications for any project in artificial intelligence. First, it places the primary emphasis on the significance of symbolic processes rather than on the configuration and capacities of hardware, assuming no transformation of quantity into quality (a transformation the concept of technological singularity does assume). Second, it implies that the symbolic nature of human mental processes must be the central focus of any effort to replicate human intelligence artificially. In distinction to previous analogies in the philosophy of mind, it also does not liken the mind/brain relationship to that of software and hardware, because the mind—the symbolic cultural process—is a self-generating and endlessly creative process, a feature that no dynamic code structure begins to approximate.

In neuroscience, it is illogical to dig into the minutiae of the structure and function of the brain in the expectation of explaining how our biological nature may have, at some original point, given rise to symbols. This activity is retro-speculative in an unscientific sense—even Darwin emphatically highlighted the inability of science to explain origins [1]. What we do have empirical access to is evidence of the human symbolic process all around us: the mind, though symbolic and therefore non-material, constantly creates material by-products and leaves material side effects (such as buildings, roads, domesticated animals, pollution, and computers) outside of us. As scientists, we have the possibility of taking this unique type of data into account while analyzing the incredible organ that is constantly engaged in interpreting and generating symbolic stimuli, and perhaps of applying our understanding to virtual models that more accurately represent the unique nature of human intelligence.

In the present paradigm, however, computers no more compete with minds than high-speed trains or fast-running cheetahs compete with Shakespeare (a comparison which, however lame, is at least possible). A core quality of the symbolic and historical process of human life, which distinguishes humanity from all other forms of life, making it a reality sui generis on both the collective level (as culture) and on the level of the individual (as the mind), is its endless, unpredictable creativity. The mind does not process information: it creates. It creates information, misinformation, forms of knowledge that cannot be called information at all, and myriad other phenomena that do not belong to the category of knowledge. Minds do not do computer-like things; ergo, computers cannot outcompete us all.

References

[1] Edelman, G. Bright Air, Brilliant Fire: On the Matter of the Mind. Basic Books, 1992.

[2] Greenfeld, L. Mind, Modernity, Madness. Harvard University Press, Cambridge, 2013.

[Originally published on ACM Ubiquity]

The Maddening of America

The relative global decline of the United States has become a frequent topic of debate in recent years. Proponents of the post-American view point to the 2008 financial crisis, the prolonged recession that followed, and China’s steady rise. Most are international-relations experts who, viewing geopolitics through the lens of economic competitiveness, imagine the global order as a seesaw, in which one player’s rise necessarily implies another’s fall.

But the exclusive focus on economic indicators has prevented consideration of the geopolitical implications of a US domestic trend that is also frequently discussed, but by a separate group of experts: America’s ever-increasing rates of severe mental disease (rates that have long been very high).

The claim that the spread of severe mental illness has reached “epidemic” proportions has been heard so often that, like any commonplace, it has lost its ability to shock. But the repercussions for international politics of the disabling conditions diagnosed as manic-depressive illnesses (including major unipolar depression) and schizophrenia could not be more serious.—Liah Greenfeld

Mind, Brain, and Culture

“You cannot understand the mind without understanding how the brain works”—Patricia S. Churchland, Touching a Nerve

“Each human brain is part of a dynamic, interacting system of other brains embedded in culture… why are the shops full of books such as Touching a Nerve, which show that it is the brain that makes decisions, determines moral values and explains political attitudes? I can only assume that these are the modern equivalent of Gothic horror stories. We love to be frightened by the thought that we are nothing more than the 1.5 kilograms of sentient meat that is our brain, but we don’t really believe it”—Chris Frith, “My Brain and I,” Nature

“While culture can be referred to as ‘collective mind,’ the mind can be conceptualized as ‘culture in the brain,’ or ‘individualized culture.’ These are not just two elements of the same—symbolic and mental—reality; they are one and the same process occurring on two different levels, the individual and the collective, similar to the life of an organism and of the species to which it belongs in the organic world. The fundamental laws governing this process on both levels are precisely the same laws, and at every moment, at every stage in it, it moves back and forth between the levels; it cannot, not for a split second, occur on only one of them. The mind constantly borrows symbols from culture, but culture can only be processed—i.e., symbols can only have significance and be symbols—in the mind”—Liah Greenfeld