Gregg Henriques Ph.D.

Evolutionary Psychology

The conceptual unification of psychology: a unified framework will benefit the field.

Posted February 29, 2012

Have you ever looked across the field of psychology and noted with some dismay all the various approaches taken in the discipline? Have you ever found yourself wondering whether psychology is really a coherent discipline and, if so, why it is so hard to define clearly? Or have you ever considered whether all those approaches could someday be unified, such that the key insights from the various branches and paradigms could be coherently connected into one grand metapsychology?

Questions like these awakened in me a deep intellectual curiosity that ultimately culminated in the development of the "unified theory." Trained as a clinical psychologist, I was fortunate that early in my graduate education I gained a rich exposure to the psychotherapy integration movement, which is organized around the idea that the best elements of the various approaches to psychotherapy should somehow be integrated. This led me to many important realizations about psychotherapy, including: a) many of the "single" schools were defined against one another both conceptually and politically; b) no single school had the depth and breadth in both the humanistic and scientific domains to offer a comprehensive solution; and c) much overlap between the schools becomes apparent as one becomes proficient in their languages and concepts. However, despite these realizations, there were significant difficulties in achieving a coherent integrative view.

First, the competing schools clearly had different (although often implicit) moral emphases. Second, if one considers, as I do, psychotherapy to be the application of psychological principles in the service of promoting human well-being, then it follows that the disorganization of psychological science seriously hampers, if not completely prevents, the development of a coherent, general approach to psychotherapy.

Although it is now obvious with the benefit of hindsight, I essentially backed into this second point. I was looking for basic, core conceptual commonalities that cut across the various perspectives in psychotherapy and started to explore a broad array of literatures. Fortunately, evolutionary psychology was just beginning to make a major impact on the field, and in it I found a major piece of the puzzle. All of the major perspectives were grounded in evolutionary theory, which could thus provide a shared point of departure from which to view each of the competing paradigms.

The Development of the Justification Hypothesis

Learning about evolutionary theory set the stage for what I consider to this day to be a major theoretical breakthrough: an idea I came to call the Justification Hypothesis. So what, exactly, is the Justification Hypothesis? Technically, it is the idea that the evolution of language created the adaptive problem of social justification, and that this adaptive problem shaped the design features of the human self-consciousness system. In more general terms, and more importantly for this context, it means that we can think about the organization of human reflective thought and human culture in terms of justification systems.

Although it would take years to develop into a formal proposal, the proverbial "flash" of insight came on a drive home after completing a psychological evaluation of a woman hospitalized following a suicide attempt. In her late thirties, she was diagnosed with a double depression and an avoidant personality disorder. A woman of above-average intellect, she had graduated from high school, worked as a teacher's aide, and lived in almost complete isolation on the brink of poverty. In a reasonably familiar story line, her father was an authoritarian, verbally abusive alcoholic who dominated her timid, submissive mother. He was also physically and violently abusive toward her older brother, who was much more defiant of his power. She distinctly remembered several episodes of her father beating her brother while yelling at him that he needed to be more like his obedient sister. Perhaps the most salient feature of this patient's character structure was her complete sense of inadequacy. She viewed herself as totally incompetent in almost every conceivable way and expressed an extreme dependency on the guidance of others. In presenting the case to my supervisor and classmates, I argued that the network of self-deprecating beliefs served an obvious function, given her developmental history. Namely, the beliefs she had about herself had justified submission and deference in a context where any form of defiance was severely punished. It was the first time I explicitly used the concept of justification to describe how language-based beliefs about self and others were functionally organized.

I arrived home about half an hour late following the discussion about the patient and found myself explaining to my wife that traffic had been particularly bad. Traffic had been bad, but the reality was that it accounted for only about ten minutes of my tardiness. I had left work twenty minutes later than anticipated because I was eagerly discussing the patient's dynamics with my fellow students. In a moment of heightened self-reflection, I became acutely aware that this reason for my tardiness was much less emphasized as I explained my actions to my wife. My mind had effortlessly accessed the traffic reason and just as effortlessly suppressed the reason that was significantly less justifiable, at least as far as my wife was concerned at that moment. It was upon reflecting on my own justifications and how they were selected that the broad generalization dawned on me. The patient was not the only individual whose "justification system" for why she was the way she was could be understood as arising out of her developmental history and social context.

I came to see processes of justification as being ubiquitous in human affairs. Arguments, debates, moral dictates, rationalizations, and excuses, as well as many core beliefs about the self, all involve the process of explaining why one's claims, thoughts, or actions are warranted. In virtually every form of social exchange, from blogging to warfare to politics to family struggles to science, humans are constantly justifying their behaviors to themselves and to others. Moreover, it was not only the ubiquity of justification in human affairs that made the idea so intriguing. It became clear upon reflection that the process is a uniquely human phenomenon. Other animals communicate, struggle for dominance, and form alliances. But they don't justify why they do what they do. Indeed, if I had to boil the uniqueness of human nature down to one word, it would be justification. We are the justifying animal.

The JH became an obsession for me because the idea seemed to cut across many different areas of thought. It was obviously congruent with basic insights from a psychodynamic perspective. It was also clearly consistent with many of the foremost concerns of the humanists. For example, Rogers's argument that much psychopathology can be understood as a split between the social self and the true self could be easily understood through the lens of the JH. Consider how a judgmental, powerful other might force particular justifications in a manner that produces intrapsychic rifts between how a person "really" feels and how they must say they feel. The JH is also directly consistent with cognitive psychotherapy, which can be readily interpreted as a systematic approach to identifying and testing one's justification system. But the idea also pulled in psychological science. Cognitive dissonance, the self-serving bias, human reasoning biases, and the "interpreter function" of the left hemisphere were all readily accounted for by the formulation of the JH. The JH also seamlessly incorporated insights from those who emphasize cultural levels of analysis.


The Tree of Knowledge System: The Second Key Insight

Once the dimension of human behavior was clearly delineated from the behavior of other animals, a fascinating new formulation began to emerge, which I called the Tree of Knowledge System and depicted in the graphic below.

[Figure: the Tree of Knowledge System]

The ToK System offers a vision of emergent evolution as consisting of one level of pure information (Energy) and four levels or dimensions of complexity (Matter, Life, Mind, and Culture) that correspond to the behavior of four classes of objects (material objects, organisms, animals, and humans) and to four classes of science (physical, biological, psychological, and social).

Another key element of the system is that each of the four dimensions is associated with a theoretical joint point that provides the causal explanatory framework for its emergence. As explained in the prior post, the modern evolutionary synthesis is the theoretical merger of Darwin's theory of natural selection and genetics, and it provides for the conceptual unification of biology. Biology is a unified discipline precisely because it has a clear, well-established definition (the science of Life), an agreed-upon subject matter (organisms), and a theoretical system that provides the causal explanatory framework for the emergence of that subject matter (natural selection operating on genetic combinations across the generations). It is this crisp conceptual organization that leaves scientifically minded psychologists with feelings of bio-envy.


If the modern evolutionary synthesis represents the Matter-to-Life joint point, what about the Life-to-Mind and Mind-to-Culture joint points? Here is where the unified theory does its best work. It shows that Skinner's ideas can be combined with cognitive neuroscience to provide the framework for the Life-to-Mind joint point; this idea is called Behavioral Investment Theory. And the Justification Hypothesis connects Freud's key observations with modern social and cognitive psychology to provide the framework for the Mind-to-Culture joint point. Together, these two theoretical joint points "box in" psychology and provide a unified theoretical framework for the field.

Not only does the system provide a way to theoretically integrate perspectives that have been very disparate, it also provides a powerful new tool for carving out the proper conception of the field at large. Even a preliminary analysis mapping the ToK System onto the varying conceptions of the discipline suggests that the idea of what psychology is about has historically spanned two fundamentally separate problems: (1) the problem of animal behavior in general (Mind on the ToK System), and (2) the problem of human behavior at the individual level (Culture). Through the meta-level view afforded by the ToK System, we can now see that previous efforts to define the field have failed in part because they have attempted to force one solution onto a problem that consists of two fundamentally distinct dimensions.


I teach my students that the science of psychology should be divided into two large scientific domains: (1) basic psychology and (2) human psychology. Basic psychology is defined as the science of Mind (mental behavior) and corresponds to the behavior of animals. Human psychology is considered a unique subset of the psychological formalism that deals with human behavior at the level of the individual. Because human behavior is immersed in the larger socio-cultural context (dimension four in the ToK System), human psychology is considered a hybrid discipline that merges the pure science of psychology with the social sciences. The crisp boundary system I am proposing stands in contrast to conceptions that place the science of psychology in a vague, amorphous space between biology and the social sciences.

The "critic" in the post that started this series claimed that psychology could not be conceptually unified because there are too many political forces that pull it apart. However, what many psychologists are now beginning to see is that with the right map, we can, in fact, rise above the political forces, and move toward a more coherent, accurate, integrative, and healthy vision of what the field is all about.


Gregg Henriques, Ph.D., is a professor of psychology at James Madison University.


The Unity of Science

The topic of unity in the sciences can be explored through questions such as the following: Is unity a feature of reality or of our modes of cognition? Is there one privileged, most basic or fundamental concept or kind of thing and, if not, how are the different concepts or kinds of things in the universe related? Can the various natural sciences (e.g., physics, astronomy, chemistry, biology) be unified into a single overarching theory, and can theories within a single science (e.g., general relativity and quantum theory in physics, or models of evolution and development in biology) be unified? How are the so-called human sciences related to the natural ones? Are theories or models the relevant connected units? What other connected or connecting units are there? Does the unification of these parts of science involve only matters of fact or are matters of value involved as well? What about matters of method, material, institutional, ethical and other aspects of intellectual cooperation? Moreover, what kinds of unity, not just units, in the sciences are there? And is the relation of unification one of reduction, translation, explanation, logical inference, collaboration or something else? What roles can unification play in scientific practices, their development, application and evaluation? Are those expressions of more general cognitive activities? How are all these questions to be investigated? Is unification an aim of inquiry or a guiding idea at the service of ulterior aims? Is it a question for philosophy, e.g., for metaphysics or epistemology? If so, how can examining scientific practices help? Or is philosophy, rather, a resource for understanding science?

Entry Contents

  • 1. Historical development in philosophy and science: from Greek philosophy to logical empiricism in America
  • 1.1 From Greek thought to Western science
  • 1.2 Rationalism and Enlightenment
  • 1.3 German tradition since Kant
  • 1.4 Positivism and logical empiricism
  • 2. Varieties of unity
  • 3.1 Reduction
  • 3.2 Antireductionism
  • 3.3 Epistemic roles: from demarcation to explanation and evidence. Varieties of connective unity
  • 4.1 Ontological unities and reduction
  • 4.2 Ontological unities and antireductionism
  • 5.1 The Stanford school
  • 5.2 Pluralism
  • 5.3 Metapluralism
  • 6. Conclusion: why unity and what difference does it really make?
  • Other Internet Resources
  • Related Entries

1. Historical development in philosophy and science: from Greek philosophy to logical empiricism in America

Unity has a history as well as a logic. Different formulations and debates express intellectual and other resources and interests in different contexts. Questions about unity belong partly in a tradition of thought that can be traced back to pre-Socratic Greek cosmology, in particular to the preoccupation with the question of the One and the Many. In what senses is the world and, as a result, our knowledge of it one? A number of representations of the world in terms of a few simple constituents that were considered fundamental emerged: Parmenides’ static substance, Heraclitus’ flux of becoming, Empedocles’ four elements, Democritus’ atoms, Pythagoras’ numbers, Plato’s forms, and Aristotle’s categories. The underlying question of the unity of our types of knowledge was explicitly addressed by Plato in the Sophist as follows: “Knowledge also is surely one, but each part of it that commands a certain field is marked off and given a special name proper to itself. Hence language recognizes many arts and many forms of knowledge” ( Sophist , 257c). Aristotle asserted in On the Heavens that knowledge concerns what is primary, and different “sciences” know different kinds of causes; it is metaphysics that comes to provide knowledge of the underlying kind.

With the advent and expansion of Christian monotheism, the organization of knowledge reflected the idea of a world governed by the laws dictated by God, its creator and legislator. From this tradition emerged encyclopedic efforts such as the Etymologies, compiled in the sixth century by the Andalusian Isidore, Bishop of Seville, the works of the Catalan Ramon Llull in the Middle Ages and those of the Frenchman Petrus Ramus in the Renaissance. Llull introduced iconic tree-diagrams and forest-encyclopedias representing the organization of different disciplines including law, medicine, theology and logic. He also introduced more abstract diagrams—not unlike some found in Cabbalistic and esoteric traditions—in an attempt to combinatorially encode the knowledge of God's creation in a universal language of basic symbols. Their combination would be expected to generate knowledge of the secrets of creation and help articulate knowledge of universal order (mathesis universalis), which would, in turn, facilitate communication with different cultures and their conversion to Christianity. Ramus introduced diagrams representing dichotomies and gave prominence to the view that the starting point of all philosophy is the classification of the arts and sciences. The encyclopedic organization of knowledge served the project of its preservation and communication.

The emergence of a distinctive tradition of scientific thought addressed the question of unity through the designation of a privileged method, which involved a privileged language and set of concepts. Formally, at least, it was modeled after the Euclidean ideal of a system of geometry. In the late-sixteenth century, Francis Bacon held that one unity of the sciences was the result of our organization of records of discovered material facts in the form of a pyramid with different levels of generalities. These could be classified in turn according to disciplines linked to human faculties. Concomitantly, the controlled interaction with phenomena of study characterized so-called experimental philosophy. In accordance with at least three traditions—the Pythagorean tradition, the Bible’s dictum in the Book of Wisdom and the Italian commercial tradition of bookkeeping—Galileo proclaimed at the turn of the seventeenth century that the Book of Nature had been written by God in the language of mathematical symbols and geometrical truths, and in it, the story of Nature’s laws was told in terms of a reduced set of objective, quantitative primary qualities: extension, quantity of matter and motion. A persisting rhetorical role for some form of theological unity of creation should not be neglected when considering pre-twentieth-century attempts to account for the possibility and desirability of some form of scientific knowledge. Throughout the seventeenth century, mechanical philosophy and Descartes’ and Newton’s systematization from basic concepts and first laws of mechanics became the most promising framework for the unification of natural philosophy. After the demise of Laplacian molecular physics in the first half of the nineteenth century, this role was taken over by ether mechanics and, unifying forces and matter, energy physics.

Descartes and Leibniz gave this tradition a rationalist twist that was centered on the powers of human reason and the ideal of a system of knowledge founded on rational principles. It became the project of a universal framework of exact categories and ideas, a mathesis universalis (Garber 1992; Gaukroger 2002). Adapting the scholastic image of knowledge, Descartes proposed an image of a tree in which metaphysics is depicted by the roots, physics by the trunk, and mechanics, medicine and morals by the branches. Leibniz proposed a general science in the form of a demonstrative encyclopedia. This would be based on a "catalogue of simple thoughts" and an algebraic language of symbols, characteristica universalis, which would render all knowledge demonstrative and allow disputes to be resolved by precise calculation. Both defended the program of founding much of physics on metaphysics and ideas from life science (Smith 2011) (Leibniz's unifying ambitions with symbolic language and physics extended beyond science, to settle religious and political fractures in Europe). By contrast, while sharing a model of a geometric, axiomatic structure of knowledge, Newton's project of natural philosophy was meant to be autonomous from a system of philosophy and, in the new context, still endorsed, for its model of organization and its empirical reasoning, values of formal synthesis and ontological simplicity (see the entry on Newton; also Janiak 2008).

Belief in the unity of science or knowledge, along with the universality of rationality, was at its strongest during the European Enlightenment. The most important expression of the encyclopedic tradition came in the mid-eighteenth century from Diderot and D’Alembert, editors of the Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers (1751–1772). Following earlier classifications by Nichols and Bacon, their diagram presenting the classification of intellectual disciplines was organized in terms of a classification of human faculties. Diderot stressed in his own entry, “Encyclopaedia”, that the word signifies the unification of the sciences. The function of the encyclopedia was to exhibit the unity of human knowledge. Diderot and D’Alembert, in contrast to Leibniz, made classification by subject the primary focus, and introduced cross-references instead of logical connections. The Enlightenment tradition in Germany culminated in Kant’s critical philosophy.

For Kant, one of the functions of philosophy was to determine the precise unifying scope and value of each science. For him, the unity of science is not the reflection of a unity found in nature, or, even less, assumed in a real world behind the apparent phenomena. Rather, it has its foundations in the unifying a priori character or function of concepts, principles and of Reason itself. Nature is precisely our experience of the world under the universal laws that include such concepts. And science, as a system of knowledge, is “a whole of cognition ordered according to principles”, and the principles on which proper science is grounded are a priori (Preface to Metaphysical Foundations of Natural Science ). A devoted but not exclusive follower of Newton’s achievements and insights, he maintained through most of his life that mathematization and a priori universal laws given by the understanding were preconditions for genuine scientific character (like Galileo and Descartes earlier, and Carnap later, Kant believed that mathematical exactness constituted the main condition for the possibility of objectivity). Here Kant emphasized the role of mathematics coordinating a priori cognition and its determined objects of experience. Thus, he contrasted the methods employed by the chemist, a “systematic art” organized by empirical regularities, with those employed by the mathematician or physicist, which were organized by a priori laws, and he held that biology is not reducible to mechanics—as the former involves explanations in terms of final causes (see Critique of Pure Reason , Critique of Judgment and Metaphysical Foundations of Natural Science ). With regards to biology—insufficiently grounded in the fundamental forces of matter—its inclusion requires the introduction of the idea of purposiveness (McLaughlin 1991). More generally, for Kant, unity was a regulative principle of reason, namely, an ideal guiding the process of inquiry toward a complete empirical science with its empirical concepts and principles grounded in the so-called concepts and principles of the understanding that constitute and objectify empirical phenomena. (On systematicity as a distinctive aspect of this ideal and on its origin in reason, see Kitcher 1986 and Hoyningen-Huene 2013).

Kant’s ideas set the frame of reference for discussions of the unification of the sciences in German thought throughout the nineteenth century (Wood and Hahn 2011). He gave philosophical currency to the notion of worldview ( Weltanschauung ) and, indirectly, world-picture ( Weltbild ), establishing among philosophers and scientists the notion of the unity of science as an intellectual ideal. From Kant, German-speaking Philosophers of Nature adopted the image of Nature in terms of interacting forces or powers and developed it in different ways; this image found its way to British natural philosophy. In Great Britain this idealist, unifying spirit (and other notions of an idealist and romantic turn) was articulated in William Whewell’s philosophy of science. Two unifying dimensions are these: his notion of mind-constructed fundamental ideas, which form the basis for organizing axioms and phenomena and classifying sciences, and the argument for the reality of explanatory causes in the form of consilience of induction , wherein a single cause is independently arrived at as the hypothesis explaining different kinds of phenomena.

In the face of expanding research, the unifying emphasis on organization, classification and foundation led to exploring differences and rationalizing boundaries. The German intellectual current culminated in the late-nineteenth century in the debates among philosophers such as Windelband, Rickert and Dilthey. In their views and those of similar thinkers, a worldview often included elements of evaluation and life meaning. Kant had established the basis for the famous distinction between the natural sciences (Naturwissenschaften) and the cultural, or social, sciences (Geisteswissenschaften) popularized in theory of science by Wilhelm Dilthey and Wilhelm Windelband. Dilthey, Windelband, his student Heinrich Rickert, and Max Weber (although the first two preferred Kulturwissenschaften, which excluded psychology) debated over how differences in subject matter between the two kinds of sciences forced a distinctive difference between their respective methods. Their preoccupation with the historical dimension of the human phenomena, along with the Kantian emphasis on the conceptual basis of knowledge, led to the suggestion that the natural sciences aimed at generalizations about abstract types and properties, whereas the human sciences studied concrete individuals and complexes. The human case suggested a different approach based on valuation and personal understanding (Weber's verstehen). For Rickert, individualized concept formation secured knowledge of historical individuals by establishing connections to recognized values (rather than personal valuations). In biology, Ernst Haeckel defended a monistic worldview (Richards 2008).

The Weltbild tradition influenced the physicists Max Planck and Ernst Mach, who engaged in a heated debate about the precise character of the unified scientific world-picture. Mach’s more influential view was both phenomenological and Darwinian: the unification of knowledge took the form of an analysis of ideas into biologically embodied elementary sensations (neutral monism) and was ultimately a matter of adaptive economy of thought. Planck adopted a realist view that took science to gradually approach complete truth about the world, and he fundamentally adopted the thermodynamical principles of energy and entropy (on the Mach-Planck debate see Toulmin 1970). These world-pictures constituted some of the alternatives to a long-standing mechanistic view that, since the rise of mechanistic philosophy with Descartes and Newton, had informed biology as well as most branches of physics. In the background was the perceived conflict between the so-called mechanical and electromagnetic worldviews, which resulted throughout the first two decades of the twentieth century in the work of Albert Einstein (Holton 1998).

In the same German tradition, and amidst the proliferation of work on energy physics and books on unity of science, the German energeticist Wilhelm Ostwald declared the twentieth century the “Monistic century”. During the 1904 World’s Fair in St. Louis, the German psychologist and Harvard professor Hugo Munsterberg organized a congress under the title “Unity of Knowledge”; invited speakers were Ostwald, Ludwig Boltzmann, Ernest Rutherford, Edward Leamington Nichols, Paul Langevin and Henri Poincaré. In 1911, the International Committee of Monism held its first meeting in Hamburg, with Ostwald presiding. [ 1 ] Two years later it published Ostwald’s monograph, Monism as the Goal of Civilization . In 1912, Mach, Felix Klein, David Hilbert, Einstein and others signed a manifesto aiming at the development of a comprehensive worldview. Unification remained a driving scientific ideal. In the same spirit, Mathieu Leclerc du Sablon published his L’Unité de la Science (1919), exploring metaphysical foundations, and Johan Hjorst published The Unity of Science (1921), sketching out a history of philosophical systems and unifying scientific hypotheses.

The German tradition stood in opposition to the prevailing empiricist views that, since the time of Hume, Comte and Mill, held that the moral or social sciences (even philosophy) relied on conceptual and methodological analogies with geometry and the natural sciences, not just astronomy and mechanics, as well as with biology. In the Baconian tradition, Comte emphasized a pyramidal hierarchy of disciplines in his “encyclopedic law” or order, from the most general sciences about the simplest phenomena to the most specific sciences about the most complex phenomena, each depending on knowledge from its more general antecedent: from inorganic physical sciences (arithmetic, geometry, mechanics, astronomy, physics and chemistry) to the organic physical ones, such as biology and the new “social physics”, soon to be renamed sociology (Comte 1830–1842). Mill, instead, pointed to the diversity of methodologies for generating, organizing and justifying associated knowledge with different sciences, natural and human, and the challenges to impose a single standard (Mill 1843, Book VI). He came to view political economy eventually as an art, a tool for reform more than a system of knowledge (Snyder 2006).

Different yet connected currents of nineteenth-century positivism, first in European philosophy in the first half of the century—Auguste Comte, J.S. Mill and Herbert Spencer—and subsequently in North American philosophy—John Fiske, Chauncey Wright and William James—arose out of intellectual tensions between metaphysics and the sciences and identified positivism as synthetic and scientific philosophy; accordingly, they were concerned with the ordering and unification of knowledge through the organization of the sciences. The synthesis was either of methods alone—Mill and Wright—or else also of doctrines—Comte, Spencer and Fiske. Some used the term “system” especially in relation to a common logic or method (Pearce 2015).

In the twentieth century the unity of science became a distinctive theme of the scientific philosophy of logical empiricism (Cat 2021). The question of unity engaged science and philosophy alike. In their manifesto, logical empiricists—known controversially also as logical positivists—and most notably the founding members of the Vienna Circle, adopted the Machian banner of “unity of science without metaphysics”. This was a normative criterion of unity with a role in social reform based on the demarcation between science and metaphysics: a unity of method and language that included all the sciences, natural and social. A common method did not necessarily imply a more substantive unity of content involving theories and their concepts.

A stronger reductive model within the Vienna Circle was recommended by Rudolf Carnap in his The Logical Construction of the World (1928). While embracing the Kantian connotation of the term “constitutive system”, it was inspired by recent formal standards: Hilbert’s axiomatic approach to formulating theories in the exact sciences and Frege’s and Russell’s logical constructions in mathematics. It was also predicated on the formal values of simplicity, rationality, (philosophical) neutrality and objectivity associated with scientific knowledge. In particular, Carnap tried to explicate such notions in terms of a rational reconstruction of science in terms of a method and a structure based on logical constructions out of (1) basic concepts in axiomatic structures and (2) rigorous, reductive logical connections between concepts at different levels.

Different constitutive systems or logical constructions would serve different (normative) purposes: a theory of science and a theory of knowledge. Both foundations raised the issue of the nature and universality of a physicalist language.

One such system of unified science is the theory of science, in which the construction connects concepts and laws of the different sciences at different levels, with physics and its genuine laws as fundamental, lying at the base of the hierarchy. Because of the emphasis on the formal and structural properties of our representations, objectivity, rationality and unity go hand in hand. Carnap’s formal emphasis developed further in Logical Syntax of Language (1934). Alternatively, all scientific concepts could be constituted or constructed in a different system in the protocol language out of classes of elementary complexes of experiences, scientifically understood, representing experiential concepts. Carnap subsequently defended the epistemological and methodological universality of physicalist language and physicalist statements. The unity of science in this context was an epistemological project (for a survey of the epistemological debates, see Uebel 2007; on different strands of the anti-metaphysical normative project of unity see Frost-Arnold 2005).

Whereas Carnap aimed at rational reconstructions, another member of the Vienna Circle, Otto Neurath, favored a more naturalistic and pragmatic approach, with a less idealized and reductive model of unity. His evolving standards of unity were generally motivated by the complexity of empirical reality and the application of empirical knowledge to practical goals. He spoke of an “encyclopedia-model”, opposed to the classic ideal of a pyramidal, reductive “system-model”. The encyclopedia-model took into account the presence within science of ineliminable and imprecise terms from ordinary language and the social sciences and emphasized a unity of language and the local exchanges of scientific tools. Specifically, Neurath stressed the material-thing language called “physicalism”, not to be confounded with the emphasis on the vocabulary of physics. Its motivation was partly epistemological, and Neurath endorsed anti-foundationalism; no unified science, like a boat at sea, would rest on firm foundations. The scientific spirit abhorred dogmatism. This weaker model of unity emphasized empiricism and the normative unity of the natural and the human sciences.

Like Carnap’s unified reconstructions, Neurath’s had pragmatic motivations. Unity without reductionism provided a tool for cooperation and it was motivated by the need for successful treatment—prediction and control—of complex phenomena in the real world that involved properties studied by different theories or sciences (from real forest fires to social policy): unity of science at the point of action. It is an argument from holism, the counterpart of Duhem’s claim that only clusters of hypotheses are confronted with experience. Neurath spoke of a “boat”, a “mosaic”, an “orchestration”, and a “universal jargon”. Following institutions such as the International Committee on Monism and the International Council of Scientific Unions, Neurath spearheaded a movement for Unity of Science in 1934 that encouraged international cooperation among scientists and launched the project of an International Encyclopedia of Unity of Science. It expressed the internationalism of his socialist convictions and the international crisis that would lead to the Second World War (Kamminga and Somsen 2016).

At the end of the Eighth International Congress of Philosophy, held in Prague in September of 1934, Neurath proposed a series of International Congresses for the Unity of Science. These took place in Paris, 1935; Copenhagen, 1936; Paris, 1937; Cambridge, England, 1938; Cambridge, Massachusetts, 1939; and Chicago, 1941. For the organization of the congresses and related activities, Neurath founded the Unity of Science Institute in 1936 (renamed in 1937 as the International Institute for the Unity of Science) alongside the International Foundation for Visual Education, founded in 1933. The Institute’s executive committee was composed of Neurath, Philip Frank and Charles Morris.

After the Second World War, a discussion of unity engaged philosophers and scientists in the Inter-Scientific Discussion Group, first as the Science of Science Discussion Group, in Cambridge, Massachusetts, founded in October 1940 primarily by Philip Frank and Carnap (themselves founders of the Vienna Circle), Quine, Feigl, Bridgman, and the psychologists E. Boring and S.S. Stevens. This would later become the Unity of Science Institute. The group was joined by scientists from different disciplines, from quantum mechanics (Kemble and Van Vleck) and cybernetics (Wiener) to economics (Morgenstern), as part of what was both a self-conscious extension of the Vienna Circle and a reflection of local concerns within a technological culture increasingly dominated by the interest in computers and nuclear power. The characteristic feature of the new view of unity was the ideas of consensus and, subsequently, especially within the USI, cross-fertilization. These ideas were instantiated in the emphasis on scientific operations (operationalism) and the creation of war-boosted cross-disciplines such as cybernetics, computation, electro-acoustics, psycho-acoustics, neutronics, game theory and biophysics (Galison 1998; Hardcastle 2003).

In the late 1960s, Michael Polanyi and Marjorie Grene organized a series of conferences funded by the Ford Foundation on unity of science themes (Grene 1969a, 1969b, 1971). Their general character was interdisciplinary and anti-reductionist. The group was originally called “Study Group on Foundations of Cultural Unity”, but this was later changed to “Study Group on the Unity of Knowledge”. By then, a number of American and international institutions were already promoting interdisciplinary projects in academic areas (Klein 1990). For both Neurath and Polanyi, the organization of knowledge and science, the Republic of Science, was inseparable from ideals of political organization.

Over the last four decades much historical and philosophical scholarship has challenged the ideals of monism and reductionism and pursued a growing critical interest in pluralism and interdisciplinary collaboration (see below). Along the way, the distinction between the historical human sciences and ahistorical natural sciences has received much critical and productive attention. One outcome has been the application and development of concepts and accounts in philosophy of history in understanding different human and natural sciences as historical, with a special focus on the epistemic role of unifying narratives and standards and problems in disciplines such as archeology, geology, cosmology and paleontology (see, for instance, Morgan and Wise 2017; Currie 2019). Another outcome has been a renewed critique of the autonomy of the social sciences. One concern is, for instance, the rising social and economic value of the natural sciences and their influence on the models and methods of the social sciences; another is the influence on both of certain particular political and economic views. Other related concerns are the role of the social sciences in political life and the assumption that humans are unique and superior to animals and emerging technologies (see, for instance, van Bouwel 2009b; Kincaid and van Bouwel 2023).

The historical introductory sections have aimed to show the intellectual centrality, varying formulations, and significance of the concept of unity. The rest of the entry presents a variety of modern themes and views. It will be helpful to introduce a number of broad categories and distinctions that can sort out different kinds of accounts and track some relations between them, as well as additional significant philosophical issues. (The categories are not mutually exclusive, and they sometimes partly overlap. Therefore, while they help label and characterize different positions, they cannot provide a simple, easy and neatly ordered conceptual map.)

Connective unity is a weaker and more general notion than the specific ideal of reductive unity; the latter requires asymmetric relations of reduction (see below), which typically rely on assumptions about hierarchies of levels of description and the primacy—conceptual, ontological, epistemological and so on—of a fundamental representation. The category of connective unity helps accommodate and bring attention to the diversity of non-reductive accounts.

Another useful distinction is between synchronic and diachronic unity . Synchronic accounts are ahistorical, assuming no meaningful temporal relations. Diachronic accounts, by contrast, introduce genealogical hypotheses involving asymmetric temporal and causal relations between entities or states of the systems described. Evolutionary models are of this kind; they may be reductive to the extent that the posited original entities are simpler and on a lower level of organization and size. Others simply emphasize connection without overall directionality.

In general, it is useful to distinguish between ontological unity and epistemological unity , even if many accounts bear both characteristics and fall under both rubrics. In some cases, one kind supports the other salient kind in the model. Ontological unity is here broadly understood as involving relations between descriptive conceptual elements; in some cases, the concepts will describe entities, facts, properties or relations, and descriptive models will focus on metaphysical aspects of the unifying connections such as holism, emergence, or downwards causation. Epistemological unity applies to epistemic relations or goals such as explanation. Methodological connections and formal (logical, mathematical, etc.) models may belong in this kind. This article does not draw any strict or explicit distinction between epistemological and methodological dimensions or modes of unity.

Additional possible categories and distinctions include the following: vertical unity or inter-level unity is unity of elements attached to levels of analysis, composition or organization on a hierarchy, whether for a single science or more, whereas horizontal unity or intra-level unity applies to one single level and to its corresponding kind of system (Wimsatt 2007). Global unity is unity of any other variety with a universal quantifier of all kinds of elements, aspects or descriptions associated with individual sciences as a kind of monism , for instance, taxonomical monism about natural kinds, while local unity applies to a subset. (Cartwright has distinguished this same-level global form of reduction, or “imperialism”, in Cartwright 1999; see also Mitchell 2003). Obviously, vertical and horizontal accounts of unity can be either global or local. Finally, the rejection of global unity has been associated with both isolationism —keeping independent competing alternative representations of the same phenomena or systems—as well as with local integration —the local connective unity of the alternative perspectives. A distinction of methodological nature contrasts internal and external perspectives, according to whether the accounts are based naturalistically on the local contingent practices of certain scientific communities at a given time or based on universal metaphysical assumptions broadly motivated (Ruphy 2017). (Ruphy has criticized Cartwright and Dupré for having adopted external metaphysical positions and defended the internal perspective, also present in the program of the so-called Minnesota School, i.e., Kellert et al. 2006.)

3. Epistemological unities

The project of unity, as mentioned above, has long guided models of scientific understanding by privileging descriptions of more fundamental entities and phenomena such as the powers and behaviors of atoms, molecules and machines. Since at least the 1920s and until the 1960s, unity was understood in terms of formal approaches to theories, of semantic relations between vocabularies and of logical relations between linguistic statements in those vocabularies. The ideal of unity was formulated, accordingly, in terms of reduction relations to more fundamental terms and statements. Different accounts have since stood and fallen with any such commitments. Also, note that identifying relations of reduction must be distinguished from the ideal of reductionism. Reductionism is the adoption of reduction relations as the global ideal or standard of a proper unified structure of scientific knowledge; and distance from that ideal was considered a measure of its unifying progress.

In general, reduction reconciles ontological and epistemological considerations of identity and difference. Claims such as "mental states are reducible to neurochemical states", "chemical differences reduce to differences between atomic structures" and "optics can be reduced to electromagnetism" imply the truth of identity statements—a strong ontological reduction. However, the relations of reduction between entities or properties in such claims also get additional epistemic value from the expressed semantic diversity of conceptual content and the asymmetry of the relation described in such claims between the reducing and the reduced claims—concepts, laws, theories, etc. (Riel 2014).

Elimination and eliminativism are the extreme forms of reduction and reductionism that commit to aligning the epistemic content with the ontological content fixed by the identity statements: since only x exists and claims about x best meet scientific goals, only talk of x merits scientific consideration and use.

Two formulations of unification in the logical positivist tradition of the ideal logical structure of science placed the question of unity at the core of philosophy of science: Carl Hempel’s deductive-nomological model of explanation and Ernst Nagel’s model of reduction. Both are fundamentally epistemological models, and both are specifically explanatory, at least in the sense that explanation serves unification. The emphasis on language and logical structure makes explanatory reduction a form of unity of the synchronic kind. Still, Nagel’s model of reduction is a model of scientific structure and explanation as well as of scientific progress. It is based on the problem of relating different theories as different sets of theoretical predicates (Nagel 1961).

Reduction requires two conditions: connectability and derivability . Connectability of laws of different theories requires meaning invariance in the form of extensional equivalence between descriptions, with bridge principles between coextensive but distinct terms in different theories.

Nagel’s account distinguishes two kinds of reductions: homogeneous and heterogeneous. When both sets of terms overlap, the reduction is homogeneous. When the related terms are different, the reduction is heterogeneous. Derivability requires a deductive relation between the laws involved. In the quantitative sciences, the derivation often involves taking a limit. In this sense the reduced science is considered an approximation to the reducing new one.
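As a rough schematic of heterogeneous reduction (a textbook-style rendering rather than Nagel’s own notation), the laws of the reduced theory \(T_2\) are to be derived from those of the reducing theory \(T_1\) together with a set of bridge principles \(B\):

\[
T_1 \cup B \vdash T_2, \qquad B = \{\, P_i \leftrightarrow Q_i \,\},
\]

where each bridge principle pairs a term \(P_i\) of the reduced theory with a coextensive term \(Q_i\) of the reducing theory; connectability is carried by \(B\), derivability by the deduction \(\vdash\).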

Neo-Nagelian accounts have attempted to solve Nagel’s problem of reduction between putatively incompatible theories. The following paragraphs list a few.

Nagel’s two-term relation account has been modified by weaker conditions of analogy and a role for conventions, requiring the relation to be satisfied not necessarily by the two original theories, \(T_1\) and \(T_2\) (respectively the newer, more general theory and the older, less general one), but by the modified theories \(T'_1\) and \(T'_2\). Explanatory reduction is then strictly a four-term relation in which \(T'_1\) is “strongly analogous” to \(T_1\) and, with the insight that the more fundamental theory can offer, corrects the older theory \(T_2\), changing it to \(T'_2\). Nagel’s account also requires that bridge laws be synthetic identities, in the sense that they be factual, empirically discoverable and testable; in weaker accounts, admissible bridge laws may include elements of convention (Schaffner 1967; Sarkar 1998). The difficulty lay especially with the task of specifying or giving a non-contextual, transitive account of the relations between \(T\) and \(T'\) (Wimsatt 1976).
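One way to picture this corrected, four-term pattern (again a schematic sketch under the assumptions stated above, not the authors’ own notation):

\[
T'_1 \cup B \vdash T'_2, \qquad T'_1 \ \text{strongly analogous to}\ T_1, \qquad T'_2 \ \text{a corrected version of}\ T_2 .
\]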

An alternative set of semantic and syntactic conditions of reduction bears counterfactual interpretations. For instance, syntactic conditions may take the form of limit relations (e.g., classical mechanics reduces to relativistic mechanics in the limit of the speed of light \(c \to \infty\) and to quantum mechanics in the limit of the Planck constant \(h \to 0\)) and ceteris paribus assumptions (e.g., when optical laws can be identified with results of the theory of electromagnetism); then, each condition helps explain why the reduced theory works where it does and fails where it does not (Glymour 1969).
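As a standard physics illustration of such a limit relation (my example, not one drawn from the entry), the relativistic kinetic energy recovers the Newtonian expression as \(v/c \to 0\):

\[
E_k = mc^2\!\left(\frac{1}{\sqrt{1 - v^2/c^2}} - 1\right) = \frac{1}{2}mv^2 + \frac{3}{8}\,\frac{m v^4}{c^2} + \cdots \ \longrightarrow\ \frac{1}{2}mv^2 .
\]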

A different approach to reductionism acknowledges a commitment to providing explanation but rejects the value of a focus on the role of laws. This approach typically draws a distinction between hard sciences such as physics and chemistry and special sciences such as biology and the social sciences. It claims that laws that are in a sense operative in the hard sciences are not available in the special ones, or play a more limited and weaker role, and this on account of historical character, complexity or reduced scope. The rejection of empirical laws in biology, for instance, has been argued on grounds of historical dependence on contingent initial conditions (Beatty 1995), and as a matter of supervenience (see the entry on supervenience) of spatio-temporally restricted functional claims on lower-level molecular ones, and the multiple realization (see the entry on multiple realizability) of the former by the latter (Rosenberg 1994; Rosenberg’s argument from supervenience to reduction without laws must be contrasted with Fodor’s physicalism about the special sciences, which retains laws without reduction (see below and the entry on physicalism); for a criticism of these views see Sober 1996). This non-Nagelian approach assumes further that explanation rests on identities between predicates and deductive derivations (reduction and explanation might be said to be justified by derivations, but not constituted by them; see Spector 1978). Explanation is provided by lower-level mechanisms; their explanatory role is to replace final why-necessarily questions (functional) with proximate how-possibly questions (molecular).

One suggestion to make sense of the possibility of the supervening functional explanations without Nagelian reduction may rely on an ontological type of relation of reduction such as the composition of powers in explanatory mechanisms (Gillette 2010, 2016). The reductive commitment to the lower level is based on relations of composition, at play in epistemological analysis and metaphysical synthesis, but is merely formal and derivational. We may be able to infer what composes the higher level, but we cannot simply get all the relevant knowledge of the higher level from our knowledge of the lower level (see also Auyang 1998). This kind of proposal points to epistemic anti-reductionism.

Note that significant proposals of reductive unity rely on ontological assumptions about some hierarchy of levels organizing reality and scientific theories of it (more below).

A more general characterization views reductionism as a research strategy. On this methodological view reductionism can be characterized by a set of so-called heuristics (non-algorithmic, efficient, error-based, purpose-oriented, problem-solving tasks) (Wimsatt 2006): heuristics of conceptualization (e.g., descriptive localization of properties, system-environment interface determinism, level and entity-dependence), heuristics of model-building and theory construction (e.g., model intra-systemic localization with emphasis of structural properties over functional ones, contextual simplification and external generalization) and heuristics of observation and experimental design (e.g., focused observation, environmental control, local scope of testing, abstract shared properties, behavioral regularity and context-independence of results).

From the 1930s, the focus was on a syntactic approach, with physics as the paradigm of science, deductive logical relations as the form of cognitive or epistemic goals such as explanation and prediction, and theory and empirical laws as paradigmatic units of scientific knowledge (Suppe 1977; Grünbaum and Salmon 1988). The historicist turn in the 1960s, the semantic turn in philosophy of science in the 1970s and a renewed interest in the special sciences have changed this focus. The very structure of a hierarchy of levels has lost its credibility, even for those who believe in it as a model of autonomy of levels rather than as an image of fundamentalism. The rejection of such models and their emendations have occupied the last four decades of philosophical discussion about unity in and of the sciences (especially in connection to psychology and biology, and more recently chemistry). A valuable consequence has been the strengthening of philosophical projects and communities devoting more sustained and sophisticated attention to special sciences, different from physics.

The first target of antireductionist attacks has been Nagel’s demand of extensional equivalence. It has been dismissed as an inadequate demand of “meaning invariance” and approximation, and with it the possibility of deductive connections. Mocking the positivist legacy of progress through unity, empiricism and anti-dogmatism, these constraints have been decried as intellectually dogmatic, conceptually weak and methodologically overly restrictive (Feyerabend 1962). The emphasis is placed, instead, on the merits of the new theses of incommensurability and methodological pluralism.

A similar criticism of reduction involves a different move: that the deductive connection be guaranteed provided that the old, reduced theory was “corrected” beforehand (Schaffner 1967). The evolution and the structure of scientific knowledge could be neatly captured, using Schaffner’s expression, by “layer-cake reduction”. The terms “length” and “mass”—or the symbols \(l\) and \(m\)—for instance, may be the same in Newtonian and relativistic mechanics, or the term “electron” the same in classical physics and quantum mechanics, or the term “atom” the same in quantum mechanics and in chemistry, or “gene” in Mendelian genetics and molecular genetics (see, for instance, Kitcher 1984). But the corresponding concepts, it is argued, are not. Concepts or words are to be understood as getting their content or meaning within a holistic or organic structure, even if the organized wholes are the theories that include them. From this point of view, different wholes, whether theories or Kuhnian paradigms, manifest degrees of conceptual incommensurability. As a result, the derived, reducing theories typically are not the allegedly reduced, older ones; and their derivation sheds no relevant insight into the relation between the original, older one and the new (Feyerabend 1962; Sklar 1967).

From a historical standpoint, the positivist model collapsed the distinction between synchronic and diachronic reduction, that is, between reductive models of the structure and the evolution, or succession, of scientific theories. By contrast, historicism, as embraced by Kuhn and Feyerabend, drove a wedge between the two dimensions and rejected the linear model of scientific change in terms of accumulation and replacement. For Kuhn, replacement becomes partly continuous, partly non-cumulative change in which one world—or, less literally, one world-picture, one paradigm—replaces another (after a revolutionary episode of crisis and proliferation of alternative contenders) (Kuhn 1962). This image constitutes a form of pluralism, and, like the reductionism it is meant to replace, it can be either synchronic or diachronic. Here is where Kuhn and Feyerabend parted ways. For Kuhn, synchronic pluralism only describes the situation of crisis and revolution between paradigms. For Feyerabend, history is less monistic, and pluralism is and should remain a synchronic and diachronic feature of science and culture (Feyerabend, here, thought science and society inseparable, and followed Mill’s philosophy of liberal individualism and democracy).

A different kind of antireductionism addresses a more conceptual dimension, the problem of categorial reduction: meta-theoretical categories of description and interpretation for mathematical formalisms, e.g., criteria of causality, may block full reduction. Basic interpretative concepts that are not just variables in a theory or model are not reducible to counterparts in fundamental descriptions (Cat 2000; the case of individuality in quantum physics has been discussed in Healey 1991; Redhead and Teller 1991; Auyang 1995; and, in psychology, in Block 2003).

3.3 Epistemic roles: From demarcation to explanation and evidence. Varieties of connective unity. Aesthetic value

Unity has been considered an epistemic virtue and goal, with different modes of unification associated with roles such as demarcation, explanation and evidence. It can also be traced to synthetic cognitive tasks of categorization and reasoning—also applied in the sciences—especially through relations of similarity and difference (Cat 2022b).

Demarcation. Certain models of unity, which we may call container models, attempt to demarcate science from non-science. The criteria adopted are typically methodological and normative, not descriptive. Unlike connective models, they serve a dual function of drawing up and policing a boundary that (1) encloses and endorses the sciences and (2) excludes other practices. As noted above, some demarcation projects have aimed to distinguish between natural and special sciences. The more notorious ones, however, have aimed to exclude practices and doctrines dismissed under the labels of metaphysics, pseudo-science or popular knowledge. Empirical or not, the applications of standards of epistemic purity are not merely identification or labeling exercises for the sake of carving out scientific inquiry as a natural kind or mapping out intellectual landscapes. The purpose is to establish authority, and the stakes involve educational, legal and financial interests. Recent controversies include not just the teaching of creation science, but also polemics over the scientific status of, for instance, homeopathy, vaccination and models of plant neurology and climate change.

The most influential demarcation criterion has been Popper’s original anti-metaphysics barrier: the condition of empirical falsifiability of scientific statements. It required the logically possible relation to basic statements, linked to experience, that can prove general hypotheses to be false with certainty. For this purpose, he defended the application of a particular deductive argument, the modus tollens (Popper 1935/1951). Another demarcation criterion is explanatory unity, empirically grounded. Hempel’s deductive-nomological model characterizes the scientific explanation of events as a logical argument that expresses their expectability in terms of their subsumption under an empirically testable generalization. Explanations in the historical sciences too must fit the model if they are to count as scientific. They could then be brought into the fold as bona fide scientific explanations even if they could qualify only as explanation sketches.

Since their introduction, Hempel’s model and its weaker versions have been challenged as neither generally applicable nor appropriate. The demarcation criterion of unity is undermined by criteria of demarcation between natural and historical sciences. For instance, historical explanations have a genealogical or narrative form, or else they require the historian to engage problems or to issue a conceptual judgment that meaningfully brings together a set of historical facts (recent versions of such decades-old arguments are in Cleland 2002, Koster 2009, Wise 2011). According to more radical views, natural sciences such as geology and biology are historical in their contextual, causal and narrative forms; moreover, Hempel’s model, especially the requirement of empirically testable strict universal laws, is satisfied by neither the physical sciences nor the historical sciences, including archeology and biology (Ereshefsky 1992).

A number of legal decisions have appealed to Popper’s and Hempel’s criteria, adding the epistemic role of peer review, publication and consensus around the sound application of methodological standards. A more recent criterion has sought a different kind of demarcation: it is comparative rather than absolute; it aims to compare science and popular science; it adopts a broader notion of the German tradition of Wissenschaften, that is, roughly of scholarly fields of research that include formal sciences, natural sciences, human sciences and the humanities; and it emphasizes the role of systematicity, with an emphasis on different forms of epistemic connectedness as weak forms of coherence and order (Hoyningen-Huene 2013).

Explanation. Unity has been defended in the wake of authors such as Kant and Whewell as an epistemic criterion of explanation or at least as fulfilling an explanatory role. In other words, rather than modeling unification in terms of explanation, explanation is modeled in terms of unification. A number of proposals introduce an explanatory measure in terms of the number of independent explanatory laws or phenomena conjoined in a theoretical structure. On this representation, unity contributes understanding and confirmation from the fewest basic kinds of phenomena, regardless of explanatory power in terms of derivation or argument patterns (Friedman 1974; Kitcher 1981; Kitcher 1989; Wayne 1996; within a probabilistic framework, Myrvold 2003; Sober 2003; Roche and Sober 2017; see below).

A weaker position argues that unification is not explanation on the grounds that unification is simply systematization of old beliefs and operates as a criterion of theory-choice (Halonen and Hintikka 1999).

The unification account of explanation has been defended within a more detailed cognitive and pragmatist approach. The key is to think of explanations as question–answer episodes involving four elements: the explanation-seeking question about \(P\) (\(P\)?), the cognitive state \(C\) of the questioner/agent for whom \(P\) calls for explanation, the answer \(A\), and the cognitive state \(C+A\) in which the need for explanation of \(P\) has disappeared. A related account models unity in the cognitive state in terms of the comparative increase of coherence and elimination of spurious unity—such as circularity or redundancy (Schurz 1999). Unification is also based on information-theoretic transfer or inference relations. Unification of hypotheses is only a virtue if it unifies data. The last two conditions imply that unification also yields empirical confirmation. Explanations are global increases in unification in the cognitive state of the cognitive agent (Schurz 1999; Schurz and Lambert 1994).

The unification–explanation link can be defended on the grounds that laws make unifying similarity expectable (hence Hempel-explanatory), and this similarity becomes the content of a new belief (Weber and Van Dyck 2002 contra Halonen and Hintikka 1999). Unification is not the mere systematization of old beliefs. Contra Schurz, they argue that scientific explanation is provided by novel understanding of facts and the satisfaction of our curiosity (Weber and Van Dyck 2002 contra Schurz 1999). In this sense, causal explanations, for instance, are genuinely explanatory and do not require an increase of unification.

A contextualist and pluralist account argues that understanding is a legitimate aim of science that is pragmatic and not necessarily formal, or a subjective psychological by-product of explanation (De Regt and Dieks 2005). In this view explanatory understanding is variable and can have diverse forms, such as causal-mechanical and unification, without conflict (De Regt and Dieks 2005). In the same spirit, Salmon linked unification to the epistemic virtue or goal of explanation and distinguished between unification and causal-mechanical explanation as forms of scientific explanatory understanding (Salmon 1998).

Explanation may also provide unification of different fields of research through explanatory dependence so that one uses the other’s results to explain one’s own (Kincaid 1997).

Views on scientific explanation have evolved away from the formal and cognitive accounts of the epistemic categories. According to some, those accounts misidentify the source of the understanding provided by scientific explanations (Barnes 1992). For important, though not all, cases the genuine source lies in causal explanation or causal mechanism (Cartwright 1983; Cartwright 1989; see also Glennan 1996; Craver 2007). Mechanistic models of explanation have become entrenched in philosophical accounts of the life sciences (Darden 2006; Craver 2007). As an epistemic virtue, the role of unification has been traced to the causal form of the explanation, for instance, in statistical regularities (Schurz 2015). The challenge extends to the alleged extensional link between explanation on the one hand, and truth and universality on the other (Cartwright 1983; Dupré 1993; Woodward 2003). In this sense, explanatory unity, which rests on metaphysical assumptions about components and their properties, also involves a form of ontological or metaphysical unity. (For a methodological criticism of external, metaphysical perspectives, see Ruphy 2016.)

Similar criticisms extend to the traditionally formalist arguments in physics about fundamental levels; there, unification fails to yield explanation in the formal scheme based on laws and their symmetries. Unification and explanation conflict on the grounds that in biology and physics only causal mechanical explanations answering why-questions yield understanding of the connections that contribute to “true unification” (Morrison 2000; Morrison’s choice of standard for evaluating the epistemic accounts of unity and explanation and her focus on systematic theoretical connections without reduction has not been without critics, e.g., Wayne 2002; Plutynski 2005; Karaca 2012).

Methodology. Unity has long been understood as a methodological principle, primarily, but not exclusively, in reductionist versions (e.g., for the case of biology, see Wimsatt 1976, 2006). This is different from the case of unity through methodological prescriptions. One methodological criterion appeals to the epistemic virtues of simplicity or parsimony, whether epistemological or ontological (Sober 2003). As a formal probabilistic principle of curve-fitting or average predictive accuracy, the relevance of unity is objective. Unity plays the role of an empirical background theory.
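
To make the curve-fitting reading of parsimony concrete, here is a minimal sketch, assuming nothing beyond standard numpy; the data, the polynomial families and the AIC-style penalty are illustrative choices of mine, not a reconstruction of Forster and Sober's own derivation. The point is only that, once parsimony is read as estimated predictive accuracy, extra parameters must pay their way.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, x.size)   # data from a simple linear process plus noise

def score(degree):
    # Fit a polynomial of the given degree and score it by an AIC-style criterion:
    # goodness of fit (residual sum of squares) plus a penalty for each extra parameter.
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 2                       # fitted coefficients plus the noise variance
    n = x.size
    return n * np.log(rss / n) + 2 * k   # lower score = better estimated predictive accuracy

for d in (1, 5, 9):
    print(f"degree {d}: score = {score(d):.1f}")
# Typically the simple degree-1 model wins: higher-degree fits chase noise and pay the penalty.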

The methodological role of unification may track scientific progress. Unlike in the case of the explanatory role of unification presented above, this account of progress and interdisciplinarity relies on a unifying role of explanation: as in evo-devo in biology, unification is a process of advancement in which two fields of research are in the process of unification through mutual explanatory relevance, that is, when results in one field are required to pose and address questions in the other, raising explananda and explanations (Nathan 2017).

Heuristic dependence involves one influencing, accommodating, contributing to and addressing the other’s research questions (for examples relating chemistry and physics see Cat and Best 2023).

Evidence. As in the relation of unification to explanation, unification is considered an epistemic criterion of evidence in support of the unified account (for a non-probabilistic account of the relation between unification and confirmation, see Schurz 1999). The resulting evidence and demonstration may be called synthetic evidence and demonstration. Synthetic evidence may be the outcome of synthetic modes of reasoning that rely on assumptions of similarity and difference, for instance in cases of robustness, cross-checking and meta-analysis (Cat 2022b).

Like probabilistic models of explanation, recent formal discussions of unity and coherence within the framework of Bayesianism place unity in evidentiary reasoning (Forster and Sober 1994, sect. 7; Schurz and Lambert 2005 is also a formal model, with an algebraic approach). More generally, the probabilistic framework articulates formal characterizations of unity and introduces its role in evaluations of evidence.

A criterion of unity defended for its epistemic virtue in relation to evidence is simplicity or parsimony (Sober 2013, 2016). Comparatively speaking, simpler hypotheses, models or theories present a higher likelihood of truth, empirical support and accurate prediction. From a methodological standpoint, however, appeals to parsimony might not be sufficient. Moreover, the connection between unity as parsimony and likelihood is not interest-relative, at least in the way that the connection between unity and explanation is (Sober 2003; Forster and Sober 1994; Sober 2013, 2016).

On the Bayesian approach, the rational comparison and acceptance of probabilistic beliefs in the light of empirical data is constrained by Bayes’ Theorem for conditional probabilities (where \(h\) and \(d\) are the hypothesis and the data respectively):

\[
P(h \mid d) = \frac{P(d \mid h)\, P(h)}{P(d)}.
\]

One explicit Bayesian account of unification as an epistemic, methodological virtue has introduced the following measure of unity: a hypothesis \(h\) unifies phenomena \(p\) and \(q\) to the degree that, given \(h\), \(p\) is statistically/probabilistically relevant to (or correlated with) \(q\) (Myrvold 2003; for a probabilistically equivalent measure of unity in Bayesian terms see McGrew 2003; on the equivalence, Schupbach 2005). This measure of unity has been criticized as neither necessary nor sufficient (Lange 2004; Lange’s criticism assumes the unification–explanation link; in a rebuttal, Schupbach 2005 rejects this and other assumptions behind Lange’s criticism). In a recent development, Myrvold argues for mutual information unification, i.e., that hypotheses are supported by their ability to increase the amount of what he calls the mutual information of the set of evidence statements (see Myrvold 2017). The explanatory unification contributed by hypotheses about common causes is an instance of the information condition.
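
One way to make the relevance idea concrete is the sketch below. It is my illustration, not Myrvold's or McGrew's own formalism: the toy probabilities describe a hypothetical common-cause hypothesis h for two phenomena p and q, and the log-ratio of the joint to the product of the marginals (all conditional on h) is one standard way of expressing the probabilistic relevance the text gestures at.

import math

def relevance_given_h(p_joint_given_h, p_p_given_h, p_q_given_h):
    # Zero if p and q are independent given h; positive if h renders them
    # mutually relevant, which is the informal gloss of "h unifies p and q".
    return math.log2(p_joint_given_h / (p_p_given_h * p_q_given_h))

# Toy common-cause hypothesis: given h, p and q tend to occur together.
print(relevance_given_h(p_joint_given_h=0.45, p_p_given_h=0.5, p_q_given_h=0.5))  # > 0: unifying
# A hypothesis that leaves p and q independent earns no unification credit.
print(relevance_given_h(p_joint_given_h=0.25, p_p_given_h=0.5, p_q_given_h=0.5))  # = 0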

Evidentiary unification may contribute to the unification of different fields of research in the form of evidentiary dependence: Evidentiary dependence involves the appeal to the other’s results in the evaluation of one’s own (Kincaid 1997).

Aesthetic value. Finally, epistemic values of unity may rely on subsidiary considerations of aesthetic value. Nevertheless, consideration of beauty, elegance or harmony may also provide autonomous grounds for adopting or pursuing varieties of unification in terms of simplicity and patterns of order (regularity of specific relations) (McAllister 1996; Glynn 2010; Orrell 2012). Whether aesthetic judgments have any epistemic import depends on metaphysical, cognitive or pragmatic assumptions.

Unification without reduction. Reduction is not the sole standard of unity, and models of unification without reduction have proliferated. In addition, such models introduce new units of analysis. An early influential account centers around the notion of interfield theories (Darden and Maull 1977; Darden 2006). The orthodox central place of theories as the unit of scientific knowledge is replaced by that of fields. Examples of such fields are genetics, biochemistry and cytology. Different levels of organization correspond in this view to different fields: fields are individuated intellectually by a focal problem, a domain of facts related to the problem, explanatory goals, methods and a vocabulary. Fields import and transform terms and concepts from others. The model is based on the idea that theories and disciplines do not match neat levels of organization within a hierarchy; rather, many of them in their scope and development cut across different such levels. Reduction is a relation between theories within a field, not across fields.

Interdependence and hybridity. In general, the higher-level theories (for instance, cell physiology) and the lower-level theories (for instance, biochemistry) are ontologically and epistemologically interdependent on matters of informational content and evidential relevance; one cannot be developed without the other (Kincaid 1996; Kincaid 1997; Wimsatt 1976; Spector 1977). The interaction between fields (through researchers’ judgments and borrowings) may provide enabling conditions for subsequent interactions. For instance, Maxwell’s adoption of statistical techniques in color research enabled the introduction of similar ideas from social statistics in his research on reductive molecular theories of gases. The reduction, in turn, enabled experimental evidence from chemistry and acoustics; similarly, different chemical and spectroscopic bases for colors provided chemical evidence in color research (Cat 2014).

The emergence and development of hybrid disciplines and theories is another instance of non-reductive cooperation or interaction between sciences. Noted above is the post-war emergence of interdisciplinary areas of research: the so-called hyphenated sciences such as neuro-acoustics, radioastronomy, biophysics, etc. (Klein 1990; Galison 1997). On a smaller scale, in the domain of, for instance, physics, one can find semiclassical models in quantum physics or models developed around phenomena where the limiting reduction relations are singular or catastrophic, such as caustic optics and quantum chaos (Cat 1998; Batterman 2002; Belot 2005). Such semiclassical explanatory models have not found successful quantum substitutes and have placed structural explanations at the heart of the relation between classical and quantum physics (Bokulich 2008). The general form of pervasive cases of emergence has been characterized with the notion of contextual emergence (Bishop and Atmanspacher 2006), where properties, behaviors and their laws on a restricted, lower-level, single-scale domain are necessary but not sufficient for the properties and behaviors of another, e.g., higher-level one, not even of itself. The latter are also determined by contingent contexts (contingent features of the state space of the relevant system). The interstitial formation of more or less stable small-scale syntheses and cross-boundary “alliances” has been common in most sciences since the early twentieth century. Indeed, it is crucial to development in model building and growing empirical relevance in fields ranging anywhere from biochemistry to cell ecology, or from econophysics to thermodynamical cosmology. Similar cases can be found in chemistry and the biomedical sciences.

Conceptual unity. The conceptual dimension of cross-cutting has been developed in connection with the possibility of cross-cutting natural kinds that challenges taxonomical monism. Categories of taxonomy and domains of description are interest-relative, as are rationality and objectivity (Khalidi 1998; his view shares positions and attitudes with Longino 1989; Elgin 1996, 1997). Cross-cutting taxonomic systems, then, are not conceptually inconsistent or inapplicable. Both the interest-relativity and hybridity feature prominently in the context of ontological pluralism (see below).

Another, more general, unifying element of this kind is Holton’s notion of themata. Themata are conceptual values that are a priori yet contingent (both individual and social). They are forming and organizing presuppositions that factor centrally in the evolution of the science and include continuity/discontinuity, harmony, quantification, symmetry, conservation, mechanicism and hierarchy (Holton 1973). Unity of some kind is itself a thematic element. A more complex and comprehensive unit of organized scientific practice is the notion of the various styles of reasoning, such as statistical, analogical modeling, taxonomical, genetic/genealogical or laboratory styles; each is a cluster of epistemic standards, questions, tools, ontology, and self-authenticating or stabilizing protocols (Hacking 1996; see below for the relevance of this account of a priori elements to claims of global disunity—the account shares distinctive features of Kuhn’s notion of paradigm).

Another model of non-reductive unification is historical and diachronic: it emphasizes the genealogical and historical identity of disciplines, which has become complex through interaction. The interaction extends to relations between specific sciences, philosophy and philosophy of science (Hull 1988). Hull has endorsed an image of science as a process, modeling historical unity after a Darwinian-style pattern of evolution (developing an earlier suggestion by Popper). Part of the account is the idea of disciplines as evolutionary historical individuals, which can be revised with the help of more recent ideas of biological individuality: hybrid unity as an external model of unity as integration or coordination of individual disciplines and disciplinary projects, e.g., characterized by a form of occurrence, evolution or development whose tracking and identification involves a conjunction with other disciplines, projects and domains of resources, from within science or outside science. This diachronic perspective can accommodate models of discovery, in which genealogical unity integrates a variety of resources that can be both theoretical and applied, or scientific and non-scientific (an example, from physics, the discovery of superconductivity, can be found in Holton, Chang and Jurkowitz 1996). Some models of unity below provide further examples.

A generalization of the notion of interfield theories is the idea that unity is interconnection: fields are unified theoretically and practically (Grantham 2004). This is an extension of the original modes of unity or identity that single out individual disciplines. Theoretical unification involves conceptual, ontological and explanatory relations. Practical unification involves heuristic dependence, confirmational dependence and methodological integration. The social dimension of the epistemology of scientific disciplines relies on institutional unity. With regard to disciplines as professions, this kind of unity has rested on institutional arrangements such as professional organizations for self-identification and self-regulation, university mechanisms of growth and reproduction through certification, funding and training, and communication and record through journals.

Many examples of unity without reduction are local rather than global. These are not merely a phase in a global and linear project or tradition of unification (or integration), and they are typically focused on science as a human activity. From that standpoint, unification is typically understood or advocated as a piecemeal description and strategy of collaboration (see Klein 1990 on the distinction between global integration and local interdisciplinarity). Cases are restricted to specific models, phenomena or situations.

Material unity. A more recent approach to the connection between different research areas has focused on a material level of scientific practice, with attention to the use of instruments and other material objects (Galison 1997; Bowker and Star 1999). For instance, the material unity of natural philosophy in the sixteenth and seventeenth centuries relied on the circulation, transformation and application of objects in their concrete and abstract representations (Bertoloni-Meli 2006). The latter correspond to the imaginary systems and their representations, which we call models. The evolution of objects and images across different theories and experiments and their developments in nineteenth-century natural philosophy provide a historical model of scientific development, but the approach is not meant to illustrate reductive materialism, since the same objects and models work and are perceived as vehicles for abstract ideas, institutions, cultures, etc., or are prompted by them. On one view, objects are regarded as elements in so-called trading zones (see below) with shifting meanings in the evolution of twentieth-century physics, such as the cloud chamber, which was first relevant to meteorology and next to particle physics (Galison 1997). Alternatively, material objects have been given the status of boundary objects, which provide the opportunity for experts from different fields to collaborate through their respective understandings of the system in question and their respective goals (Bowker and Star 1999).

Graphic unity. At the concrete perceptual level, recent accounts emphasize the role of visual representations in the sciences and suggest what may be called graphic unification of the sciences. Their cognitive roles, methodological and rhetorical, include establishing and disseminating facts and their so-called virtual witnessing, revealing empirical relations, testing their fit with available patterns of more abstract theoretical relations (theoretical integration), suggesting new ones, aiding in computations, serving as aesthetic devices, etc. But these uses are not homogeneous across different sciences and make visible disciplinary differences. We may equally speak of graphic pluralism. The rates of use of diagrams in research publications appear to vary along the hard–soft axis of the pyramidal hierarchy, from physics, chemistry, biology, psychology, economics and sociology, to political science (Smith et al. 2000). Use is highest in physics, intuitively identified with the highest degree of hardness, understood as consensus, codification, theoretical integration and factual stability, and declines toward fields with the greatest interpretive flexibility and instability of results. Similarly, the same variation occurs among sub-disciplines within each discipline. The kinds of images and their contents also vary across disciplines and within disciplines, ranging from hand-made images of particular specimens to hand-made or mechanically generated images of particulars standing in for types, to schematic images of geometric patterns in space or time, or to abstract diagrams representing quantitative relations. Importantly, graphic tools circulate like other cognitive tools between areas of research that they in turn connect (Galison 1997; Daston and Galison 2007; Lopes 2009; see also Lynch and Woolgar 1990; Baigrie 1996; Jones and Galison 1998; Cat 2014; Kaiser 2005).

Disciplinary unity and collaboration. The relation between disciplines or fields of research has often been tracked by relations between their respective theories or epistemic products. But disciplines constitute broader and richer units of analysis of connections in the sciences, characterized, for instance, by their domain of inquiry, cognitive tools and social structure (Bechtel 1987).

Unification of disciplines, in that sense, has been variously categorized, for instance, as interdisciplinary, multidisciplinary, crossdisciplinary or transdisciplinary (Klein 1990; Graff 2005; Kellert 2008; Repko 2012). It might involve a researcher borrowing from different disciplines or the collaboration of different researchers. Neither modality of connection amounts to a straightforward generalization of, or reduction to, any single discipline, theory, etc. In either case, the strategic development is typically defended for its heuristic problem-solving or innovative powers. It is prompted by a problem that is considered complex in that it does not arise, or cannot be fully treated, within the purview of one specific discipline, where a discipline is unified or individuated around some potentially non-unique set of elements such as scope of empirical phenomena, rules, standards, techniques, conceptual and material tools, aims, social institutions, etc. Indicators of disciplinary unity may vary (Kuhn 1962; Klein 1990; Kellert 2008). Interdisciplinary research or collaboration creates a new discipline or project, such as interfield research, often leaving the existence of the original ones intact. Multidisciplinary work involves the juxtaposition of the treatments and aims of the different disciplines involved in addressing a common problem. Crossdisciplinary work involves borrowing resources from one discipline to serve the aims of a project in another. Transdisciplinary work is a synthetic creation that encompasses work from different disciplines (Klein 1990; Kellert 2008; Brigandt 2010; Hoffmann, Schmidt and Nersessian 2012; Osbeck et al. 2011; Repko 2012). These different modes of synthesis or connection are not mutually exclusive.

Models of interdisciplinary cooperation and their corresponding outcomes are often described using metaphors of different kinds: cartographic (domains, boundaries, trading zone, etc.), linguistic (pidgin language, communication, translation, etc.), architectural (building blocks, tiles, etc.), socio-political (imperialism, hierarchy, republic, orchestration, negotiation, coordination, cooperation, etc.) or embodied (cross-training). Each selectively highlights and neglects different aspects of scientific practice and properties of scientific products. Cartographic and architectural images, for instance, focus on spatial and static synchronic relations and simply connected, compatible elements. Socio-political and embodied images emphasize activity and non-propositional elements (Kellert 2008 defends the image of cross-training).

In this context, methodological unity often takes the form of borrowing standards and techniques for the application of formal and empirical methods. They range from calculational techniques and tools for theoretical modeling and simulation of phenomena to techniques for modeling of data, using instruments and conducting experiments (e.g., the culture of field experiments and, more recently, randomized control trials across natural and social sciences). A key element of scientific practice, often ignored by philosophical analysis, is expertise. As part of different forms of methodological unity, it is key to the acceptance and successful appropriation of techniques. Recent accounts of multidisciplinary collaboration as a human activity have focused on the dynamics of integrating different kinds of expertise around common systems or goals of research (Collins and Evans 2007; Gorman 2002). The same perspective can accommodate the recent interest in so-called mixed methods, e.g., different forms of integration of quantitative and qualitative methods and approaches in the social sciences (but mixed-method approaches do not typically involve mixed disciplines).

As teamwork and collaboration within and between research units have steadily increased, so has the degree of specialization (Wuchty et al. 2007). Between both trends, new reconfigurations keep forming with different purposes and types of institutional expression and support; for example, STEM, Earth sciences, physical sciences, mind–brain sciences, etc., in addition to the hybrid fields mentioned above. Yet, in educational research and funding, disciplinarity has acquired new normative purchase in opposite directions: while funding agencies and universities promote interdisciplinarity, university administrations encourage both disciplinary competition and more versatile adisciplinarity (Griffiths 2022). In the face of defenses of different forms of interdisciplinarity (Graff 2015), there is also renewed attention to the critical value of the autonomy of disciplines, or disciplinarity, as the key resource in division of labor and collaboration alike (Jacobs 2013).

The social epistemology of interdisciplinarity is complex. It develops both top-down from managers and bottom-up from practitioners (Mäki 2016; Mäki and MacLeod 2016), relying on a variety of kinds of interactions with heuristic and normative dimensions (Boyer-Kassem et al. 2018). More generally, collaborative work arguably provides epistemic, ethical and instrumental advantages, e.g., more comprehensive relevant understanding of complex problems, expanded justice to the interests of direct knowledge users and increased resources (Laursen et al. 2021). Yet, collaboration relies on a plurality of values or norms that may lead to conflicts, such as a paralyzing practical incoherence of guiding values, moral incoherence and problematic forms of oppression (Laursen et al. 2021; see more on the limits of pluralism below).

Empirical work in sociology and cognitive psychology on scientific collaboration has led to a broader perspective, including a number of dimensions of interdisciplinary cooperation involving identification of conflicts and the setting of sufficient so-called common ground integrators. These include shared (pre-existing, revised and newly developed) concepts, terminology, standards, techniques, aims, information, tools, expertise, skills (abstract, dialectical, creative and holistic thinking), cognitive and social ethos (curiosity, tolerance, flexibility, humility, receptivity, reflexivity, honesty, team-play), social interaction, institutional structures and geography (Cummings and Kiesler 2005; Klein 1990; Kockelmans 1979; Repko 2012). Sociological studies of scientific collaboration can in principle place the connective models of unity within the more general scope of social epistemology, for instance, in relation to distributive cognition (beyond the focus on strategies of consensus within communities).

The broad and dynamical approach to processes of interdisciplinary integration may effectively be understood to describe the production of different sorts and degrees of epistemic emergence. The integrated accounts require shared (old or new) assumptions and may involve a case of ontological integration, for instance in causal models. Suggested kinds of interdisciplinary causal-model integration are the following: sequential causal order in a process or mechanism cutting across disciplinary divides; horizontal parallel integration of different causal models of different elements of a complex phenomenon; horizontal joint causal model of the same effect; and vertical or cross-level causal integration (see emergent or top-down causality, below) (Repko 2012; Kockelmans 1979).

The study of the social epistemology of interdisciplinary and collaborative research has been carried out and proposed from different perspectives that illuminate different dimensions and implications. These include historical typology (Klein 1990), formal modeling (Boyer-Kassem et al. 2018), ethnography (Nersessian 2022), and ethical, scientometric and multidimensional philosophical perspectives (Mäki 2016). Other approaches aim to design and offer tools that facilitate collaboration, such as the Toolbox Dialogue Initiative (Hubbs et al. 2020). The different forms of plurality and connection also ultimately inform the organization of science in the social and political terms of diversity and democracy (Longino 1998, 2001). In terms of cooperation and coordination, unity, in this sense, cannot be reduced to consensus (Rescher 1993; van Bouwel 2009a; Repko 2012; Hoffmann, Schmidt and Nersessian 2012).

A general model of local interconnection, which has acquired widespread attention and application in different sciences, is the anthropological model of the trading zone, where hybrid languages and meanings are developed that allow for interaction without straightforward extension of any party’s original language or framework (Galison 1997). Galison has applied this kind of anthropological analysis to the subcultures of experimentation. This strategy aims to explain the strength, coherence and continuity of science in terms of local coordinations of intercalated levels of symbolic procedures and meanings, instruments and arguments.

At the experimental level, instruments, as found objects, acquire new meanings, developments and uses as they bridge over the transitions between theories, observations or theory-laden observations. Instruments and experimental projects in the case of Big Science also bring together, synchronically and interactively, the skills, standards and other resources from different communities, and they change each in turn (on interdisciplinary experimentation see also Osbeck et al. 2011). Patterns of laboratory research are shared by the different sciences, including not just instruments but the general strategies of reconfiguration of human researchers and the natural entities researched (Knorr-Cetina 1992). This includes statistical standards (e.g., statistical significance) and ideals of replication. At the same time, attention has been paid to the different ways in which experimental approaches differ among the sciences (Knorr-Cetina 1992; Guala 2005; Weber 2005) as well as to how they have been transferred (e.g., field experiments and randomized control trials) or integrated (e.g., mixed methods combining quantitative and qualitative techniques).

Interdisciplinary research has been claimed to revolve around shared boundary objects (Gorman 2002) and to yield evidentiary and heuristic integration through cooperation (Kincaid 1997). Successful interdisciplinary research, however, does not seem to require integration, e.g., evolutionary game theory in economics and biology (Grüne-Yanoff 2016). Heuristic cooperation has also led to new stable disciplines through stronger integrative forms of problem solving. In the life sciences, for instance, computational, mathematical and engineering techniques for modeling and data collection in molecular biology have led to integrative systems biology (MacLeod and Nersessian 2016; Nersessian 2022). Similarly, constraints and resources for problem-solving in physics, chemistry and biology led to the development of quantum chemistry (Gavroglu and Simões 2012; see also the case of nuclear chemistry in Cat and Best 2023).

4. Ontological unities

Since Nagel’s influential model of reduction by derivation, most discussions of the unity of science have been cast in terms of reductions between concepts and the entities they describe, and between theories incorporating the descriptive concepts. Ontological unity is expressed by a preferred set of such ontological units. In this regard, it should be noted that the selection of units typically relies on assumptions about classification or natural order such as commitments to natural kinds or a hierarchy of levels.

In terms of concepts featured in preferred descriptions, explanatory or not, reduction endorses taxonomical monism: a privileged set of fundamental kinds of things. These privileged kinds are often known as so-called natural kinds, although, as has been argued, monism does not entail realism (Slater 2005) and the notion admits of multiple interpretations, ranging from the more conventionalist to the more essentialist (Tahko 2021; see a critique in Cat 2022a). Natural kindhood has further been debated in terms, for instance, of the contrast between so-called cluster and causal theories. Connectedly, pluralism does not entail anti-realism (Dupré 1993), nor does realism entail monism or essentialism (Khalidi 2021). Without additional metaphysical assumptions, the fundamental units are ambiguous with respect to their status as either entity or property. Reduction may determine the fundamental kinds or level through the analysis of entities.

A distinctive ontological model is as follows. The hierarchy of levels of reduction is fixed by part-whole relations. The levels of aggregation of entities run all the way down to atomic particles and field parts, rendering microphysics the fundamental science (Gillett 2016). The focus of recent accounts has been placed on the relation between the causal powers at different levels and how lower-level entities and powers determine higher-level ones. Discussions have centered on whether the relation is one of identity between types or tokens of things (Thalos 2013; Gillett 2016; Wilson 2021; see more below) or even whether causal powers are the relevant unit of analysis (Thalos 2013).

A classic reference to this compositional type of account is Oppenheim and Putnam’s “The Unity of Science as a Working Hypothesis” (Oppenheim and Putnam 1958; Oppenheim and Hempel had worked in the 1930s on taxonomy and typology, a question of broad intellectual, social and political relevance in Germany at the time). Oppenheim and Putnam intended to articulate an idea of science as a reductive unity of concepts and laws reduced to those of the most elementary elements. They also defended it as an empirical hypothesis—not an a priori ideal, project or precondition—about science. Moreover, they claimed that its evolution manifested a trend in that unified direction out of the smallest entities and lowest levels of aggregation. In an important sense, the evolution of science recapitulates, in the reverse, the evolution of matter, from aggregates of elementary particles to the formation of complex organisms and species (we find a similar assumption in Weinberg’s downward arrow of explanation). Unity, then, is manifested not just in mereological form, but also diachronically, genealogically or historically .

A weaker form of ontological reduction has been advocated for the biomedical sciences under the causal notion of partial reductions: explanations of localized scope (focused on parts of higher-level systems only) laying out a causal mechanism connecting different levels in the hierarchy of composition and organization (Schaffner 1993; Schaffner 2006; Scerri has similarly discussed degrees of reduction in Scerri 1994). An extensional, domain-relative approach introduces the distinction between “domain preserving” and “domain combining” reductions. Domain-preserving reductions are intra-level reductions and occur between \(T_1\) and its predecessor \(T_2\). In this parlance, however, \(T_2\) “reduces” to \(T_1\). This notion of “reduction” does not refer to any relation of explanation (Nickles 1973).

The claim that reduction, as a relation of explanation, needs to be a relation between theories or even involve any theory has also been challenged. One such challenge focuses on “inter-level” explanations in the form of compositional redescription and causal mechanisms (Wimsatt 1976). The role of biconditionals or even Schaffner-type identities, as factual relations, is heuristic (Wimsatt 1976). The heuristic value extends to the preservation of the higher-level, reduced concepts, especially for cognitive and pragmatic reasons, including reasons of empirical evidence. This amounts to rejecting the structural, formal approach to unity and reductionism favored by the logical-positivist tradition. Reductionism is another example of the functional, purposive nature of scientific practice. The metaphysical view that follows is a pragmatic and non-eliminative realism (Wimsatt 2006). As a heuristic, this kind of non-eliminative pragmatic reductionism is a complex stance. It is, across levels, integrative and intransitive, compositional, mechanistic and functionally localized, approximative and abstractive. It is bound to adopting false idealizations, focusing on regularities and stable common behavior, circumstances and properties. It is also constrained in its rational calculations and methods, tool-binding and problem-relative. The heuristic value of eliminative, inter-level reductions has been defended as well (Poirier 2006).

The appeal to formal laws and deductive relations is dropped for sets of concepts or vocabularies in the replacement analysis (Spector 1978). This approach allows for talk of entity reduction or branch reduction, and even direct theory replacement, without the operation of laws, and it circumvents vexing difficulties raised by bridge principles and the deductive derivability condition (self-reduction, infinite regress, etc.). Formal relations only guarantee, but do not define, the reduction relation. Replacement functions are meta-linguistic statements. As Sellars argued in the case of explanation, this account distinguishes between reduction and the testing for reduction, and it highlights the role of derivations in both. Finally, replacement can be in practice or in theory. Replacement in practice does not advocate elimination of the reduced or replaced entities or concepts (Spector 1978).

As indicated above, reductive models and associated organization of scientific theories and disciplines assume an epistemic hierarchy grounded on an ontological hierarchy of levels of organization of entities, properties or phenomena, from societies down to cells, molecules and subatomic elements. Patterns of behavior of entities at lower, more fundamental levels are considered more general, stable and explanatory.

Levels forming a hierarchy are discrete and stratified so that entities at level \(n\) strongly depend on entities at level \(n-1\). The different levels may be distinguished in different ways: by differences in scale, by a relation of realization and, especially, by a relation of composition such that entities at level \(n\) are considered composed of, or decomposable into, entities at level \(n-1\).

This assumption has been the target of criticism (Thalos 2013; Potochnik 2017). Neither scale, realization nor decomposition can fix an absolute hierarchy of levels. Critics have noted, for instance, that entities may participate in causal interactions at multiple levels and across levels in physical models (Thalos 2013) and across biological and neurobiological models (Haug 2010; Eronen 2013).

In addition, the compartmentalization of theories and their concepts or vocabulary into levels neglects the existence of empirically meaningful and causally explanatory relations between entities or properties at different levels. If these relations are neglected as theoretical knowledge and left to mere bridge principles, the possibility of completeness of knowledge is jeopardized. Maximizing completeness of knowledge requires a descriptive unity of all phenomena at all levels and anything between these levels. Any bounded region or body of knowledge neglecting such cross-boundary interactions is radically incomplete, and not just confirmationally or evidentially so; we may refer to this as the problem of cross-boundary incompleteness, whether intra-level (horizontal) incompleteness or, on a hierarchy, inter-level (vertical) incompleteness (Kincaid 1997; Cat 1998).

If levels cannot track, for instance, same-level causal relations, then either their causal explanatory relevance is derivative and contextual (Craver 2007), or their role is cognitive and pragmatic, as that of conceptual coordinate systems, or, more radically, they are altogether dispensable (Thalos 2013; Potochnik 2017). As a result, they fail to fix a corresponding hierarchy of fields and subfields of scientific research defended by reductionist accounts.

The most radical form of reduction as replacement is often called eliminativism. The position has made a considerable impact in philosophy of psychology and philosophy of mind (Churchland 1981; Churchland 1986). On this view the vocabulary of the reducing theories (neurobiology) eliminates and replaces that of the reduced ones (psychology), leaving no substantive relation between them beyond a replacement rule (see also eliminative materialism).

From a semantic standpoint, one may distinguish different kinds of reduction in terms of four criteria, two epistemological and two ontological: fundamentalism, approximation, abstract hierarchy and spatial hierarchy. Fundamentalism implies that the features of a system can be explained in terms only of factors and rules from another realm. Abstract hierarchy is the assumption that the representation of a system involves a hierarchy of levels of organization with the explanatory factors being located at the lower levels. Spatial hierarchy is a special case of abstract hierarchy in which the criterion of hierarchical relation is a spatial part-whole or containment relation. Strong reduction satisfies the three “substantive” criteria, whereas weak reduction only satisfies fundamentalism. Approximate reductions—strong and hierarchical—are those which satisfy the criterion of fundamentalism only approximately (Sarkar 1998; the merit of Sarkar’s proposal resides in its systematic attention to hierarchical conditions and, more originally, to different conditions of approximation ; see also Ramsey 1995; Lange 1995).

The semantic turn extends to a more recent notion of models that does not fall under the strict semantic or model-theoretic notion of mathematical structures (Giere 1999; Morgan and Morrison 1999). This is a more flexible framework about relevant formal relations and the scope of relevant empirical situations. It is implicitly or explicitly adopted by most accounts of unity without reduction. One may add the primacy of temporal representation and temporal parts, temporal hierarchy or temporal compositionality, first emphasized by Oppenheim and Putnam as a model of genealogical or diachronic unity. This framework applies to processes both of evolution and of development (a more recent version is in McGivern 2008 and in Love and Hütteman 2011).

The shift in the accounts of scientific theory from syntactic to semantic approaches has changed conceptual perspectives and, accordingly, formulations and evaluations of reductive relations and reductionism. However, examples of the semantic approach focusing on mathematical structures and satisfaction of set-theoretic relations have focused on syntactic features—including the axiomatic form of a theory—in the discussion of reduction (Sarkar 1998; da Costa and French 2003). In this sense, the structuralist approach can be construed as a neo-Nagelian account, while an alternative line of research has championed the more traditional structuralist semantic approach (Balzer and Moulines 1996; Moulines 2006; Ruttkamp 2000; Ruttkamp and Heidema 2005).

From the opposite direction, arguments concerning new concepts such as multiple realizability and supervenience, introduced by Putnam, Kim, Fodor and others, have led to talk of higher-level functionalism, a distinction between type-type and token-token reductions, and the examination of their implications. The concepts of emergence, supervenience and downward causation are related metaphysical tools for generating and evaluating proposals about unity and reduction in the sciences. This literature has enjoyed its chief sources and developments in general metaphysics and in philosophy of mind and psychology (Davidson 1969; Putnam 1975; Fodor 1975; Kim 1993).

Supervenience, first introduced by Davidson in discussions of mental properties, is the notion that a system with properties on one level is composed of entities on a lower level and that its properties are determined by the properties of the lower-level entities or states. The relation of determination is that no changes at the higher level occur without changes at the lower level. Like token-reductionism, supervenience has been adopted by many as the poor man’s reductionism (see the entry on supervenience). A different case for the autonomy of the macrolevel is based on the notion of multiple supervenience (Kincaid 1997; Meyering 2000).

The autonomy of the special sciences from physics has been defended in terms of a distinction between type-physicalism and token-physicalism (Fodor 1974; Fodor countered Oppenheim and Putnam’s hypothesis under the rubric “the disunity of science”; see the entry on physicalism ). The key logical assumption is the type-token distinction, whereby types are realized by more specific tokens, e.g., the type “animal” is instantiated by different species, the type “tiger” or “electron” can be instantiated by multiple individual token tigers and electrons. Type-physicalism is characterized by a type-type identity between the predicates/properties in the laws of the special sciences and those of physics. By contrast, token-physicalism is based on the token-token identity between the predicates/properties of the special sciences and those of physics; every event under a special law falls under a law of physics and bridge laws express contingent token-identities between events. Token-physicalism operates as a demarcation criterion for materialism. Fodor argued that the predicates of the special sciences correspond to infinite or open-ended disjunctions of physical predicates, and these disjunctions do not constitute natural kinds identified by an associated law. Token-physicalism is the only alternative. All special kinds of events are physical, but the special sciences are not physics (for criticisms based on the presuppositions in Fodor’s argument, see Sober 1999).
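
For readers who find the type/token machinery abstract, here is a minimal sketch of multiple realizability in code; it borrows Fodor's stock example of monetary exchange, but the class names and toy realizers are my illustrative choices, not anything in the literature.

from abc import ABC, abstractmethod

class MonetaryExchange(ABC):
    # The special-science type: a functional role individuated by what it does,
    # not by the physical stuff that does it.
    @abstractmethod
    def pay(self, amount: float) -> str: ...

class ShellTransfer(MonetaryExchange):
    # One physical realizer of the role.
    def pay(self, amount: float) -> str:
        return f"handed over {amount} cowrie shells"

class ElectronicWire(MonetaryExchange):
    # A physically very different realizer of the same role.
    def pay(self, amount: float) -> str:
        return f"updated a bank ledger by {amount} units"

# Each token payment is a particular physical event (token-physicalism), but the
# realizers form no interesting physical kind: the type "monetary exchange"
# corresponds to an open-ended disjunction of physical predicates.
for token in (ShellTransfer(), ElectronicWire()):
    print(token.pay(10.0))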

The denial of remedial, weaker forms of reductionism is the basis for the concept of emergence (Humphreys 1997; Bedau and Humphreys 2008; Wilson 2021). Different accounts have attempted to articulate the idea of a whole being different from or more than the mere sum of its parts (see the entry on emergent properties). Emergence has been described beyond logical relations, synchronically as an ontological property and diachronically as a material process of fusion, in which the powers of the separate constituents lose their separate existence and effects (Humphreys 1997). This concept has been widely applied in discussions of complexity. Unlike the earliest antireductionist models of complexity in terms of holism and cybernetic properties, more recent approaches track the role of constituent parts (Simon 1996). Weak emergence has been opposed to nominal and strong forms of emergence. The nominal kind simply represents that some macro-properties cannot be properties of micro-constituents. The strong form is based on supervenience and irreducibility, with a role for the occurrence of autonomous downwards causation upon any constituents (see below). Weak emergence is linked to processes stemming from the states and powers of constituents, with a reductive notion of downwards causation of the system as a resultant of constituents’ effects (Wilson 2021); however, the connection is not a matter of Nagelian formal derivation, but of so-called universality classes (Batterman 2002; Thalos 2013), self-organization (Mitchell 2012) and implementation through computational aggregation, compression and iteration. Weak emergence, then, can be defined in terms of simulation: a macro-property, state or fact is weakly emergent if and only if it can be derived from its micro-constituents only by simulation (Bedau and Humphreys 2008; see the entry on simulations in science).

Computational models of emergence or complexity straddle the boundary between formal epistemology and ontology. They are based on simulations of chaotic dynamical processes such as cellular automata (Wolfram 1984, 2002). Their supposed superiority to combinatorial models based on aggregative functions of parts of wholes does not lack defenders (Crutchfield 1994; Crutchfield and Hanson 1997; Humphreys 2004, 2007, 2008; Humphreys and Huneman 2008; Huneman 2008a, 2008b, 2010).
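
As a concrete taste of the simulation-based models just mentioned, the following is a minimal sketch of an elementary cellular automaton (Wolfram's rule 30); the implementation details are mine, and the example is meant only to show how a macro-scale pattern is obtained by iterating a simple micro-rule step by step rather than by any closed-form derivation.

def step(cells, rule=30):
    # One synchronous update of an elementary cellular automaton with wrap-around edges:
    # each cell's next state is read off the rule number, indexed by its 3-cell neighborhood.
    n = len(cells)
    new = []
    for i in range(n):
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        new.append((rule >> neighborhood) & 1)
    return new

cells = [0] * 41
cells[20] = 1                      # start from a single live cell
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)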

Connected to the concept of emergence is top-down or downward causation, which captures the autonomous and genuine causal power of higher-level entities or states, especially upon lower-level ones. The most extreme and most controversial version includes a violation of laws that regulate the lower level (Meehl and Sellars 1956; Campbell 1974). Weaker forms require compatibility with the microlaws (for a brief survey and discussion see Robinson 2005; on downward causation without top-down causes, see Craver and Bechtel 2007; Bishop 2012). The very concept has become the subject of some interdisciplinary interest in the sciences (Ellis, Noble and O’Connor 2012).

Another general argument for the autonomy of the macrolevel in the form of non-reductive materialism has been a cognitive type of functionalism, namely, cognitive pragmatism (Van Gulick 1992). This account links ontology to epistemology. It discusses four pragmatic dimensions of representations: the nature of the causal interaction between theory-user and theory, the nature of the goals to whose realization the theory can contribute, the role of indexical elements in fixing representational content, and differences in the individuating principles applied by the theory to its types (Wimsatt’s and Spector’s arguments above are of this kind). A more ontologically substantive account of functional reduction is Ramsey’s bottom-up construction by reduction: transformation reductions streamline formulations of theories in such a way that they extend basic theories upwards by engineering their application to specific contexts or phenomena. As a consequence, they reveal, by construction, new relations and systems that are antecedently absent from a scientist’s understanding of the theory—independently of a top or reduced theory (Ramsey 1995). A weaker framework of ontological unification is categorial unity, wherein abstract categories such as causality, information, etc., are attached to the interpretation of the specific variables and properties in models of phenomena.

5. Disunity and pluralism

A more radical departure from logical-positivist standards of unity is the recent criticism of the methodological values of reductionism and unification in the sciences, and of their standing in culture and society. From the descriptive standpoint, many views under the rubric of disunity are versions of positions mentioned above. The difference is mainly normative and a matter of emphasis, scope and perspective. Such views reject global or universal standards of unity—including unity of method—by emphasizing disunity and endorsing different forms of epistemological and ontological pluralism.

An influential picture of disunity comes from related works by members of the so-called Stanford School such as John Dupré, Ian Hacking, Peter Galison, Patrick Suppes and Nancy Cartwright. Disunity is, in general terms, a rejection of universalism and uniformity, both methodological and metaphysical. Through their work, the rubric of disunity has acquired a visibility parallel to the one once acquired by unity, as an inspiring philosophical rallying cry.

From a methodological point of view, members of the school have defended, from analyses of actual scientific practice, models of local unity such as the so-called trading zone (Galison 1998), a plurality of scientific methods (Suppes 1978), and a plurality of scientific styles, with the function of establishing spaces of epistemic possibility, alongside a plurality of kinds of unities (Hacking 1996; Hacking follows the historian A.C. Crombie; for a criticism of Hacking’s historical epistemology see Kusch 2010).

From a metaphysical point of view, the disunity of science can be given adequate metaphysical foundations that make pluralism compatible with realism (Dupré 1993; Cartwright 1983, 1999). Dupré opposes a mechanistic paradigm of unity characterized by determinism, reductionism and essentialism. That paradigm spreads the values and methods of physics to the other sciences, a spread he thinks is scientifically and socially deleterious. Disunity appears characterized by three pluralistic theses: against essentialism—there is always a plurality of classifications of reality into kinds; against reductionism—there exists equal reality and causal efficacy of systems at different levels of description (that is, the microlevel is not causally complete, leaving room for downward causation); and against epistemological monism—there is no single methodology that supports a single criterion of scientificity, nor a universal domain of its applicability, leaving only a plurality of epistemic and non-epistemic virtues. The unitary concept of science should be understood, following the later Wittgenstein, as a family-resemblance concept. (For a criticism of Dupré’s ideas, see Mitchell 2003; Sklar 2003.)

Against the universalism of explanatory laws, Cartwright has argued that laws cannot be both universal and exactly true, as Hempel required in his influential account of explanation and demarcation; there exist only patchworks of laws and local cooperation. Like Dupré, Cartwright adopts a kind of scientific realism but denies that there is a universal order, whether represented by a theory of everything or a corresponding a priori metaphysical principle (Cartwright 1983). Theories apply only locally, where and to the extent that their interpretive models fit the phenomena studied, ceteris paribus (Cartwright 1999). Cartwright’s pluralism is opposed not just to vertical reductionism but also to horizontal imperialism, or universalism and globalism. She explains the more or less general domains of application of laws in terms of causal capacities and arrangements she calls nomological machines (Cartwright 1989, 1999). The regularities they bring about depend on a shielded environment. As a matter of empiricism, this is why factual regularities are manifested in the controlled environment of laboratories and experiments, where causal interference is shielded off. The controlled, stable, regular world is an engineered world. Representation rests on intervention (cf. Hacking 1983; for criticisms see Winsberg et al. 2000; Hoefer 2003; Sklar 2003; Hohwy 2003; Teller 2004; McArthur 2006; Ruphy 2016).

Disunity and autonomy of levels have been associated, conversely, with antirealism, in the sense of instrumentalist or empiricist heuristics. This includes, for Fodor and Rosenberg, higher-level sciences such as biology and sociology (Fodor 1974; Rosenberg 1994; Huneman 2010). It is against this picture that Dupré’s and Cartwright’s attacks on uniformly global unity and reductionism, above, might seem surprising, since they include an endorsement, in causal terms, of realism.[4] Rohrlich has defended a similar realist position about weaker, conceptual (cognitive) antireductionism, although on the grounds of the mathematical success of derivational explanatory reductions (Rohrlich 2001). Ruphy, however, has argued that antireductionism merely amounts to a general methodological prescription and is too weak to yield uncontroversial metaphysical lessons; these are in fact based on general metaphysical commitments external to scientific practice (Ruphy 2005, 2016).

Unlike more descriptive accounts of plurality, pluralism is a normative endorsement of plurality. The question of the metaphysical significance of disunity and anti-reductionism takes one straight to the larger issue of the epistemology and metaphysics (and aesthetics, social culture and politics) of pluralism. And here one encounters the familiar issues and notions such as conceptual schemes, frameworks and worldviews, incommensurability, relativism, contextualism and perspectivalism about goals and standards of concepts and methods (for a general discussion see Lynch 1998; on perspectivalism about scientific models see Giere 1999, 2006; Rueger 2005; Massimi and McCoy 2020).

In connection with relativism and instrumentalism, pluralism has typically been associated with antirealism about taxonomical practices. But it has been defended from the standpoint of realism (for instance, Dupré 1993; Chakravartty 2011). Pluralism about knowledge of mind-independent facts can be formulated in terms of different ways to distribute properties (sociability-based pluralism), with more specific commitments about the ontological status of the related elements and their plural contextual manifestations of powers or dispositions (Chakravartty 2011; Cartwright 2007).

From a more epistemological standpoint, pluralism applies widely to concepts, explanations, virtues, goals, methods, models and kinds of representations (see above for graphic pluralism), etc. In this sense, pluralism has been defended as a general framework that rejects the ideal of consensus in cognitive, evaluative and practical matters, against pure skepticism (nothing goes) or indifferentism (anything goes), including a defense of preferential and contextual rationality that notes the role of contextual rational commitments, by analogy with political forms of engagement (Rescher 1993; van Bouwel 2009a; Cat 2012).

Consider at least four distinctions—they are formulated about concepts, facts, and descriptions, and they apply also to values, virtues, methods, etc.:

  • Vertical vs. horizontal pluralism . Vertical pluralism is inter-level pluralism, the view that there is more than one level of factual description or kind of fact and that each is irreducible, equally fundamental, or ontologically/conceptually autonomous. Horizontal pluralism is intra-level pluralism, the view that there may be incompatible descriptions or facts on the same level of discourse (Lynch 1998). For instance, the plurality of explanatory causes to be chosen from or integrated in biology or physics has been defended as a lesson in pluralism (Sober 1999).
  • Global vs. local pluralism . Global pluralism is pluralism about every type of fact or description. Global horizontal pluralism is the view that there may be incompatible descriptions of the same type of fact. Global vertical pluralism is the view that no type of fact or description reduces to any other. Local horizontal and vertical pluralism is about one type of fact or description (Lynch 1998). It may also concern situated standpoints informed by, among others, social differences (Wylie 2015).
  • Difference vs. integrative pluralism . Difference pluralism has been defended in terms of division of labor in the face of complexity and cognitive limitations (Giere 2006), epistemic humility (Feyerabend 1962 and his final writings in the 1990s; Chang 2002), scientific freedom, empirical testability and theory choice (Popper 1935; Feyerabend 1962) and underdetermination.

Underdetermination arguments concern choices from a disjunction of equivalent types of descriptions (Mitchell 2003, 2009) or of incompatible partial representations or models of phenomena in the same intended scope (Longino 2002, 2013). The representational incompatibility may be traced to competing values or aims, or assumptions in ceteris paribus laws.

Critiques of difference pluralism point to different consequences such as failure to address complex and boundary phenomena and problems in, for instance, the life and social sciences (Gorman 2002; Mitchell 2003; 2008); detrimental effects of taxonomic instability (Sullivan 2017) and so-called intraconcept variability (Cunningham 2021) in the mind-brain sciences. When the variability characterizes what are taken to be the same concepts and terminology, several contexts of detrimental effect have been identified: (1) science education, (2) collaborative research, intra- and cross-disciplinary (e.g., the practical and moral problems of value conflicts mentioned above), (3) clinical practice, and (4) metascientific research (Cunningham 2021).

Integrative, or connective, types of pluralism are the conjunctive or holistic requirement of different types of descriptions, methods or perspectives (Mitchell 2003, 2009; contrast with the more isolationist position in Longino 2002 and her essay in Kellert, Longino and Waters 2006; Longino 2013).

To a merely syncretistic or tolerant, non-interactive pluralism, a recent body of literature has opposed a dynamic, coordinated, interactive disciplinary kind of pluralism (Chang 2012; Wylie 2015; Sullivan 2017). The former requires only a respectful division of labor; the latter may involve either limited cross-fertilization through communication and assimilation—borrowing or co-optation—or a more robust, integrative epistemic engagement. Such engagement may involve, for instance, communicative expertise—also known as interactional expertise—without contributory expertise in the other practice, as well as exchange and collaboration on the same project.

Borrowing and cross-fertilization across divides and over distances concern more than theories and change the respective practices, processes and products. They extend to data, concepts, models, methods, evidence, heuristic techniques, technology and other resources. To mention one distinction, while theoretical integration concerns concepts, ontology, explanations, models, etc., practical integration concerns methods, heuristics and testing (Grantham 2004). Thus, accounts including different versions of taxonomical pluralism range from the more conventional and contingent (from Elgin 1997 to astronomical kinds in Ruphy 2016) and the more grounded in contexts of practices (categorization work in Bowker and Star 1999; life sciences and chemistry in Kendig 2016) and the interactive (Hacking’s interactive kinds in the human sciences) to the more metaphysically substantive. Some methodological prescriptions of pluralism rely on pluralism in metascientific research including history (Chang 2012).

Connective varieties of pluralism have been endorsed on grounds of their epistemic value: considerations of empirical adequacy and predictive power that can be traced back to Neurath, the explanatory and methodological value of cross-fertilization, the epistemic benefits of a stance of openness to new kinds of facts and ways of inquiry and learning (Wylie 2015), and evidential value.

From the standpoint of evidence, we find second-order varieties of mixed evidence patterns—triangulation, security and integration—that connect different kinds of evidence to provide enhanced evidential value. The relevant plurality or difference requires independence, a condition to be explicated in different ways (Kuorikoski and Marchionni 2022). Regarding triangulation, enhanced evidence results from multiple theory-laden but theoretically independent lines of evidence in, for instance, microscopy (Hacking 1983), archeology (Wylie 1999) and interdisciplinary research in the social sciences, for example, neuroeconomics in support of existence claims about phenomena and descriptions of phenomena, but not of more general theories about them (Kuorikoski and Marchionni 2016).
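One simple way to see why the independence condition carries evidential weight is a toy Bayesian calculation; the sketch below uses invented numbers and an assumption of conditional independence, and is offered only as an illustration, not as a reconstruction of the triangulation literature cited here.

```python
# Toy illustration: with conditionally independent lines of evidence,
# likelihood ratios multiply, so two modest lines of evidence support a
# hypothesis more strongly than either line alone.

def posterior(prior, likelihood_ratios):
    """Update prior odds by the product of likelihood ratios; return a probability."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr  # conditional independence: ratios multiply
    return odds / (1.0 + odds)


prior = 0.2          # initial credence in the hypothesis
lr_microscopy = 3.0  # each line of evidence is three times likelier if the hypothesis is true
lr_behavioral = 3.0

print(round(posterior(prior, [lr_microscopy]), 2))                 # one line:  0.43
print(round(posterior(prior, [lr_microscopy, lr_behavioral]), 2))  # two lines: 0.69
```

If the two lines were not independent, the second ratio could not simply be multiplied in, and the apparent boost in evidential value would be spurious.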

Connective forms of pluralism have been modeled in terms of relations between disciplines (see above) and defended at the level of social epistemology by analogy with political models of liberal democracy and as a model of social governance between the extremes of so-called consensual mainstreaming and antagonistic exclusivism (van Bouwel 2009a). Through dialogue, a plurality of represented perspectives enables the kind of critical scrutiny of questions and knowledge claims that exposes error, bias, unexamined norms of justification and acceptance and the complexity of issues and relevant implications. As a result, it has been argued, pluralism supports a corresponding standard of critical objectivity (Longino 2002; Wylie 2015).

  • Internal vs. external pluralism . From a methodological standpoint, an internal perspective is naturalistic in its reliance on the contingent plurality of scientific practice by any of its standards. This has been defended by members of the so-called Minnesota School (Kellert, Longino and Waters 2006) and Ruphy (2016). The alternative, which Ruphy has attributed to Dupré and Cartwright, is the adoption of a metaphysical commitment external to actual scientific practice.

As a matter of actual practice, pluralism has been identified as part of a plurality of projects and perspectives in, for instance, cognitive and mind-brain sciences, where attitudes towards a plurality of ways of researching and understanding cognition vary (Milkowski and Hohol 2021). These attitudes include (1) lamenting the variability of meaning and systems of classification (Sullivan 2007; Cunningham 2021); (2) embracing complementarity between hierarchical mechanistic and computational approaches—in the face of, for instance, models with propositional declarative and lawlike statements and computational models with software codes; (3) seeking integration with mutual consistency and evidential support; (4) seeking reduction privileging neural models; and (5) seeking grand unification that values simplicity and generality over testability.

The preference for one kind of pluralism over another is typically motivated by epistemic virtues or constraints. Meta-pluralism, or pluralism about pluralism, is obviously conceivable in similar terms, as it can be found in the formulation of the so-called pluralist stance (Kellert, Longino and Waters 2006). The pluralist stance replaces metaphysical principles with scientific or empirical methodological rules and aims that have been “tested”. Like Dupré’s and Cartwright’s metaphysical positions, its metascientific position must be empirically tested. Metascientific conclusions and assumptions cannot be considered universal or necessary, but are local, contingent and relative to scientific interests and purposes. Thus, on this view, complexity does not always require interdisciplinarity (Kellert 2008), and in some situations the pluralist stance will defend reductions or specialization over interdisciplinary integration (Kellert, Longino and Waters 2006; Cat 2012; Rescher 1993).

From Greek philosophy to current debates, justifications for adopting positions on matters of unification have varied from the metaphysical and theological to the epistemic, social and pragmatic. Whether as a matter of truth (Thalos 2013) or of consequence, views on matters of unity and unification make a difference in both science and philosophy and, by application, in society as well. In science they provide strong heuristic or methodological guidance and even justification for hypotheses, projects and specific goals. In this sense, different rallying cries and idioms such as simplicity, unity, disunity, emergence or interdisciplinarity have been endowed with a normative value. Their evaluative role extends broadly. They are used to provide legitimacy, even if rhetorically, in social contexts, especially in situations involving sources of funding and profit. They set a standard for what carries the authority and legitimacy of being scientific. As a result, they make a difference in scientific evaluation, management and application, especially in public domains such as healthcare and economic decision-making. For instance, pointing to the complexity of causal structures challenges traditional deterministic or simple causal strategies of policy decision-making with known risks and unknown effects of known properties (Mitchell 2009). Last but not least is the influence that implicit assumptions about what unification can do have on science education (Klein 1990).

Philosophically, assumptions about unification help choose what sort of philosophical questions to pursue and what target areas to explore. For instance, fundamentalist assumptions typically lead one to address epistemological and metaphysical issues in terms of only results and interpretations of fundamental levels of disciplines. Assumptions of this sort help define what counts as scientific and shape scientistic or naturalized philosophical projects. In this sense, they determine, or at least strongly suggest, what relevant science carries authority in philosophical debate.

At the end of the day, one should not lose sight of the larger context that sustains problems and projects in most disciplines and practices. We are as free to pursue them as Kant’s dove is free to fly, that is, not without the surrounding air resistance to flap its wings upon and against. Philosophy was once thought to stand for the systematic unity of the sciences. The foundational character of unity became the distinctive project of philosophy, in which conceptual unity played the role of the standard of intelligibility. In addition, the ideal of unity, frequently under the guise of harmony, has long been a standard of aesthetic virtue, although this image has been eloquently challenged by, for instance, John Bayley and Iris Murdoch (Bayley 1976; Murdoch 1992). Unities and unifications help us meet cognitive and practical demands upon our life as well as cultural demands upon our self-images that are both cosmic and earthly. It is not surprising that talk of the many meanings and levels of unity—the fundamental level, unification, system, organization, universality, simplicity, atomism, reduction, harmony, complexity or totality—can place an urgent grip on our intellectual imagination.

  • Auyang, S., 1995, How is Quantum Field Theory Possible? New York: Oxford University Press.
  • Baigrie, B.S., 1996, Picturing Knowledge , Toronto: University of Toronto Press.
  • Bayley, J., 1976, The Uses of Division: Unity and Disharmony in Literature, New York: Viking Press.
  • Balzer, W., and Moulines, C. U., 1996, eds., Structuralist Theory of Science, Focal Issues, New Results , Berlin: de Gruyter.
  • Barnes, E., 1992, “Explanatory unification and scientific understanding”, Proceedings of the 1992 Biennial Meeting of the Philosophy of Science Association , 1: 3–12.
  • Batterman, R., 2002, The Devil in the Details , New York: Oxford University Press.
  • Beatty, J., 1995, “The Evolutionary Contingency Thesis”, in G. Wolters and J. Lennox, eds., Concepts, Theories, and Rationality in the Biological Sciences, Pittsburgh: University of Pittsburgh Press, 83–97.
  • Bechtel, W., 1987, “Psycholinguistics as a Case of Cross-Disciplinary Research”, Synthese , 72(3): 293–311.
  • Bedau, M.H., 2003, “Downward Causation and Autonomy in Weak Emergence”, in Bedau and Humphreys 2008, 155–188.
  • Bedau, M.H. and P. Humphreys, eds., 2008, Emergence , Cambridge, MA: MIT Press.
  • Belot, G., 2005, “Whose devil? Which details?”, Philosophy of Science , 72: 128–153.
  • Bertoloni Meli, D., 2006, Thinking with Objects: The Transformation of Mechanics in the Seventeenth Century , Baltimore: Johns Hopkins University Press.
  • Bishop, R., 2006, “Patching physics and chemistry together”, Philosophy of Science , 72: 710–722.
  • Bishop, R.C., 2012, “Fluid Convection, Constraint, and Causation”, Interface Focus , 2: 4–12.
  • Bishop, R. C. and Atmanspacher, H. 2006 “Contextual Emergence in the Description of Properties”, Foundations of Physics , 36: 1753–1777.
  • Bokulich, A., 2008, Reexamining the Quantum-Classical Relation: Beyond Reductionism and Pluralism , Cambridge: Cambridge University Press.
  • Bowker, G. and S. Star, 1999, Sorting Things Out: Classification and Its Consequences , Cambridge, MA: MIT Press.
  • Boyer-Kassem, T., C. Mayo-Wilson and M. Weisberg (eds.), 2018, Scientific Collaboration and Collective Knowledge , New York: Oxford University Press.
  • Breitenbach, A. and Y. Choi, 2017, “Pluralism and the unity of science”, The Monist , 100: 391–405.
  • Brigandt, I. 2010. “Beyond Reduction and Pluralism: Toward an Epistemology of Explanatory Integration in Biology.” Erkenntnis , 73(3): 295–311.
  • Campbell, D., 1974, “‘Downward causation’ in hierarchically organized biological systems”, in F.J. Ayala and T. Dobzhansky, eds., Studies in the Philosophy of Biology, Los Angeles: University of California Press, 179–186.
  • Cartwright, N., 1983, How the Laws of Physics Lie , Oxford: Oxford University Press.
  • –––, 1989, Nature’s Capacities and their Measurement , Oxford: Oxford University Press.
  • –––, 1999, The Dappled World: A Study of the Boundaries of Science , Cambridge: Cambridge University Press.
  • –––, 2007, Hunting Causes and Using Them: Approaches in Philosophy of Economics , Cambridge: Cambridge University Press.
  • Cartwright, N., J. Cat, L. Fleck and Thomas Uebel, 1996, Otto Neurath: Philosophy Between Science and Politics , Cambridge: Cambridge University Press.
  • Cat, J., 1998, “The physicists’ debates on unification in physics at the end of the 20th century”, Historical Studies in the Physical and Biological Sciences , vol. 28. part 2: 253–300.
  • –––, 2000, “Must the microcausality condition be interpreted causally? Beyond reduction and matters of fact”, Theoria , 37: 59–85.
  • –––, 2012, “Essay Review, S. Kellert, H. Longino and K. Waters, eds., Scientific Pluralism”, Philosophy of Science, 79: 317–325.
  • –––, 2014, “Maxwell’s color statistics: from reduction of visible errors to reduction to invisible molecules”, Studies in History and Philosophy of Science , 48: 60–75.
  • –––, 2016, “The performative construction of natural kinds: mathematical application as practice”, in C. Kendig (ed.), Natural Kinds and Classification in Scientific Practice , New York: Routledge, 87–105.
  • –––, 2021, “The unity of science”, in T.E. Uebel and Ch. Limbeck-Lilienau (eds.), Routledge Handbook of Logical Empiricism , London: Routledge, 176–184.
  • –––, 2022a, “The metaphysical elements of the unity of science”, Metascience , 31(1): 93–96.
  • –––, 2022b, “Synthesis and similarity in science: analogy in the application of mathematics and application of mathematics to analogy”, in S. Wuppuluri and A.C. Grayling, (eds.), Metaphors and Analogies in Sciences and Humanities , Cham: Springer, Synthese Library, 115–145.
  • Cat, J., and N.W. Best, 2023, “Atomic number and isotopy before nuclear structure: multiple standards and evolving collaboration of chemistry and physics”, Foundations of Chemistry , 35: 66–99. doi:10.1007/s10698-022-09450-x
  • Causey, R., 1977, Unity of Science , Dordrecht: Reidel.
  • Chakravartty, A., 2011, “Scientific realism and ontological relativity”, The Monist , 94: 157–180.
  • Chang, H., 2012, Is Water H2O? Evidence, Realism and Pluralism , Dordrecht: Springer.
  • Churchland, P.M., 1981, “Eliminative materialism and the propositional attitudes”, Journal of Philosophy , 78: 67–90.
  • Churchland, P.S., 1986, Neurophilosophy: Toward a Unified Science of the Mind/Brain , Cambridge, MA: MIT Press.
  • Cleland, C., 2002, “Methodological and epistemic difference between historical science and experimental science”, Philosophy of Science , 69 (September): 474–496.
  • Collins, H. and R. Evans, 2007, Rethinking Expertise , Chicago: University of Chicago Press.
  • Comte, A. (1830–1842) Cours de Philosophie Positive , Paris: Littré.
  • Craver, C.F., 2007, Explaining the Brain. Mechanisms and the Mosaic Unity of Neuroscience , Oxford: Clarendon Press.
  • Crutchfield, J., 1994, “The calculi of emergence: computation, dynamics and induction”, Physica D , 75: 11–54.
  • Crutchfield J., Hanson J., 1997, “Computational mechanics of cellular automata: an example”, Physica D , 103: 169–189.
  • Cummings, J.N. and S. Kiesler, 2005, “Collaborative research across disciplinary and organizational boundaries”, Social Studies of Science , 35(5): 703–722.
  • Cunningham, B., 2021, “A prototypical conceptualization for mechanisms”, Studies in the History and Philosophy of Science , 85: 79–91.
  • Currie, A., 2019, Scientific Knowledge and the Deep Past: History Matters , Cambridge: Cambridge University Press.
  • D’Alembert, J. and D. Diderot, eds., 1751–1772, Encyclopédie, ou dictionnaire raisonné des sciences, des arts at des métiers , Paris: Plon.
  • Da Costa, N.C.A. and S. French, 2003, Science and Partial Truth: A Unitary Approach to Models and Scientific Reasoning, Oxford: Oxford University Press.
  • Darden, L., 2006, Reasoning in Biological Discoveries: Essays on Mechanisms, Interfield Theories, and Anomaly Resolution , Cambridge: Cambridge University Press.
  • Darden, L. and N. Maull, 1977, “Interfield theories”, Philosophy of Science , 44: 43–64.
  • Daston, L. and P. Galison, 2007, Objectivity , Cambridge, MA: MIT Press.
  • Davidson, D., 1969, “The individuation of events”, in N. Rescher, ed., Essays in Honor of Carl G. Hempel , Dordrecht: Reidel, 216–34.
  • De Regt, H.W. and D. Dieks, 2005, “A contextual approach to scientific understanding”, Synthese , 144: 137–170.
  • Dupré, J., 1993, The Disorder of Things. Metaphysical Foundations of the Disunity of Science , Cambridge, MA: Harvard University Press.
  • Ellis, G.F.R., D. Noble and T. O’Connor, 2012, “Top-down causation: an integrating theme within and across the sciences?”, Interface Focus , (February 6), 2(1): 1–3.
  • Elgin, C.Z., 1996, Considered Judgment , Princeton, NJ: Princeton University Press.
  • –––, 1997, Between the Absolute and the Arbitrary , Ithaca, NY: Cornell University Press.
  • Ereshefsky, M., 1992, “Eliminative pluralism”, Philosophy of Science , 59: 671–90.
  • Eronen, M.I., 2013, “No levels, no problems: downward causation in neuroscience”, Philosophy of Science , 80: 1042–1052.
  • Feyerabend, P.K., 1962, “Explanation, reduction and empiricism”, in H. Feigl and G. Maxwell, eds., Minnesota Studies in the Philosophy of Science , vol. 3, Minneapolis: University of Minnesota Press, 28–97.
  • Fodor, J., 1974, “Special sciences, or the disunity of science as a working hypothesis”, Synthese , 28: 77–115.
  • Forster, M. and E. Sober, 1994, “How to tell when simpler, more unified, or less ad hoc theories will provide more accurate predictions”, British Journal for Philosophy of Science , 45: 1–35.
  • Friedman, M., 1974, “Explanation and scientific understanding”, Journal of Philosophy , 71: 5–19.
  • –––, 1992, Kant and the Exact Sciences , Cambridge, MA: Harvard University Press.
  • Frost-Arnold, G., 2005, “The large scale structure of logical empiricism: unity of science and the elimination of metaphysics”, Philosophy of Science , 72(5): 826–838.
  • Galison, P., 1997, Image and Logic , Chicago: University of Chicago Press.
  • –––, 1998, “The Americanization of unity of science”, Daedalus, 127 (Winter): 45–71.
  • Galison, P. and D. Stump, eds.,1996, The Disunity of Science. Boundaries, Contexts and Power , Stanford: Stanford University Press.
  • Garber, D., 1992, Descartes’ Metaphysical Physics , Chicago: University of Chicago Press.
  • Gaukroger, S., 2002, Descartes’ System of Natural Philosophy , Cambridge: Cambridge University Press.
  • Gavroglu, K and A. Simões, 2012, Neither Physics Nor Chemistry. A History of Quantum Chemistry , Cambridge, MA: MIT Press.
  • Giere, R.N., 1999, Science Without Laws. Science and Its Conceptual Foundations , Chicago: Chicago University Press.
  • –––, 2006, Scientific Perspectivism , Chicago: University of Chicago Press.
  • Gillett, C., 2016, Reduction and Emergence in Science and Philosophy , Cambridge: Cambridge University Press.
  • Glennan, S., 1996, “Mechanisms and the nature of causation”, Erkenntnis , 44: 49–71.
  • Glymour, C., 1969, “On some patterns of reduction”, Philosophy of Science , 37(3): 340–353.
  • Glynn, 2010, Elegance in Science: The Beauty of Simplicity , New York: Oxford University Press.
  • Gorman, M.E., 2002, “Levels of expertise and trading zones. A framework for multidisciplinary collaboration”, Social Studies of Science , 32(5/6): 933–938.
  • Graff, H.J., 2015, Undisciplining Knowledge. Interdisciplinarity in the Twentieth Century , Baltimore: Johns Hopkins University Press.
  • Grantham, T., 2004, “Conceptualizing the (dis)unity of science”, Philosophy of Science , 71: 133–155.
  • Grene, Marjorie (ed.), 1969a, Anatomy of Knowledge , London: Routledge and Kegan Paul.
  • ––– (ed.), 1969b, Toward a Unity of Knowledge , New York: International Universities Press.
  • ––– (ed.), 1971, Interpretations of Life and Mind: Essays around the Problem of Reduction , London: Routledge & Kegan Paul, New York: Humanities Press.
  • Griffiths, P.E., 2022, “Why the ‘interdisciplinarity’ push in universities is actually a dangerous antidisciplinarity trend”, The Conversation , Feb. 16, 2022 [ Griffiths 2022 available online ].
  • Grünbaum, A. and W.C. Salmon, eds.,1988, The Limitations of Deductivism , Berkeley: University of California Press.
  • Grüne-Yanoff, T., 2016, “Interdisciplinary success without integration”, European Journal for Philosophy of Science , 6: 343–360.
  • Hacking, I., 1983, Representing and Intervening , Cambridge: Cambridge University Press.
  • –––, 1996, “The disunities of science”, in P. Galison and D. Stump 1996.
  • Halonen, I. and J. Hintikka, 1999, “Unification—it’s magnificent but is it explanation?”, Synthese , 120: 27–47.
  • Hardcastle, G.L., 2003, “Debabelizing science: the Harvard Science of Science discussion group 1940–41”, in G.L. Hardcastle and A. Richardson, eds., 2003, Logical Empiricism in North America , Minneapolis: University of Minnesota Press., 170–196.
  • Healey, R., 1991, “Holism and nonseparability”, Journal of Philosophy , 88: 393–421.
  • Haug, M.C., 2010, “Realization, determination, and mechanisms”, Philosophical Studies , 150: 313–330.
  • Hoefer, C., 2003, “For fundamentalism”, Philosophy of Science , 70: 1401–1412.
  • Holton, G., 1973, The Thematic Origins of Scientific Thought , Cambridge, MA: Harvard University Press.
  • –––, 1993, “The Vienna Circle in exile: An eye-witness report”, in F. Stadler, ed., Yearbook: Vienna Circle Lectures , Boston: Kluwer.
  • Holton, G., H. Chang and E. Jurkowitz, 1996, “How a scientific discovery is made: a case history”, American Scientist , 84(4): 364–375.
  • Hohwy, J., 2003, “Capacities, explanation and the possibility of disunity”, International Studies in the Philosophy of Science, 17(2): 179–190.
  • Hoyningen-Huene, P., 2013, Systematicity , New York: Oxford University Press.
  • Hubbs, G., M. O‘Rourke and S.H. Orzack (eds.), 2020, The Toolbox Dialogue Initiative: The Power of Cross-Disciplinary Practice , New York: Routledge.
  • Hull, D., 1988, Science as Progress , Chicago: University of Chicago Press.
  • Humphreys, P., 1997, “How properties emerge”, Philosophy of Science , 64: 1–17.
  • –––, 2004, Extending Ourselves: Computational Science, Empiricism, and Scientific Method , Oxford: Oxford University Press.
  • –––, 2008, “Synchronic and diachronic emergence”, Minds & Machines , 18: 431–442
  • Humphreys, P., Huneman P., 2008, “Dynamical Emergence and Computation: an introduction”, Minds & Machines , 18: 425–430.
  • Huneman, P., 2008a, “Emergence and adaptation”. Minds and Machines , 18: 493–520.
  • Huneman, P., 2008b, “Combinatorial vs. computational emergence: emergence made ontological?”, Philosophy of Science, 75: 595–607.
  • Huneman, P., 2010. “Determinism and predictability: lessons from computational emergence”, Synthese , 185(2): 195–214.
  • Jacobs, J.A., 2013, In Defense of Disciplines. Interdisciplinarity and Specialization in Research University , Chicago: University of Chicago Press.
  • Janiak, A., 2008, Newton as Philosopher, Cambridge: Cambridge University Press.
  • Jones, C. and P. Galison, eds., 1998, Picturing Science, Producing Art , London: Routledge.
  • Kaiser, D., 2005, Drawing Theories Apart: The Dispersion of Feynman Diagrams in Postwar Physics , Chicago: University of Chicago Press.
  • Kincaid, H., 1997, Individualism and the Unity of Science , Maryland: Rowman & Littlefield.
  • Karaca, K., 2012, “Kitcher’s explanatory unification, Kaluza-Klein theories, and the normative aspect of higher dimensional unification in Physics”, British Journal for the Philosophy of Science , 63(3): 287–312.
  • Kamminga, H. and G. Somsen, eds., 2016, Pursuing the Unity of Science: Ideology and Scientific Practice from the Great War to the Cold War , Abingdon, Oxon.: Routledge.
  • Kellert, S.H., 2008, Borrowed Knowledge. Chaos Theory and the Challenge of Learning Across Disciplines , Chicago: University of Chicago Press.
  • Kellert, S.H., H.E. Longino and C.K. Waters, eds., 2006, Scientific Pluralism , Minneapolis: University of Minnesota Press.
  • Kendig, C., ed., 2016, Natural Kinds and Classification in Scientific Practice , New York: Routledge.
  • Kevles, D. and L. Hood, 1992, The Code of Codes: Scientific and Social Issues in the Human Genome Project , Cambridge, MA: Harvard University Press.
  • Khalidi, M.A., 1998, “Natural kinds and cross-cutting categories”, Journal of Philosophy , 95(1): 33–50.
  • –––, 2013, Natural Categories and Human Kinds: Classification in the Natural and Social Sciences , Cambridge: Cambridge University Press.
  • Kim, J., 1993, Supervenience and Mind . Cambridge: Cambridge University Press.
  • Kincaid, H., 1996, Individualism and The Unity of Science , Lanham: Rowman and Littlefield.
  • –––, 1997, Philosophical Foundations of the Social Sciences , Cambridge: Cambridge University Press.
  • Kincaid, H, and J. van Bouwel, eds., 2023, The Oxford Handbook of Philosophy of Political Science, Oxford: Oxford University Press.
  • Kitcher, P., 1981, “Explanatory unification”, Philosophy of Science, 48: 507–531.
  • –––, 1984, “1953 and all that: A tale of two sciences”, Philosophical Review , 93: 335–373.
  • –––, 1986, “Projecting the order of nature”, in Kant’s Philosophy of Physical Science , R. E. Butts, ed., Dordrecht: Reidel, 201–235.
  • –––, 1989, “Explanatory unification and the causal structure of the world” in P. Kitcher & W. Salmon, eds., Scientific Explanation , 410–505. Minneapolis: University of Minnesota Press.
  • Klein, J.T., 1990, Interdisciplinarity: History, Theory & Practice, Detroit: Wayne State University Press.
  • Knorr-Cetina, K., 1992, Epistemic Cultures: How Scientists Make Sense , Cambridge, MA: Harvard University Press.
  • Kockelmans, J., ed., 1979, Interdisciplinarity and Higher Education , University Park: The Pennsylvania State University Press.
  • Koster, E., 2009, “Understanding in historical science. Intelligibility and judgment”, in H.W. de Regt, S. Leonelli and K. Eigner, eds., 2009, Scientific Understanding , Pittsburgh, Pa: University of Pittsburgh Press: 314–333.
  • Kuhn, T.S., 1962, The Structure of Scientific Revolutions , Chicago: University of Chicago Press.
  • Kuorikoski, J. and C. Marchionni, 2016, “Triangulation across the lab, the scanner and the field: the case of social preferences”, European Journal for Philosophy of Science, 6: 361–376.
  • –––, 2023, “Evidential variety and mixed methods research in social science”, Philosophy of Science , published online 16 February 2023. doi:10.1017/psa.2023.34
  • Kusch, Martin, 2010, “Hacking’s historical epistemology: a critique of styles of reasoning”, Studies in History and Philosophy of Science (Part A), 41(2): 158–173.
  • Lange, M., 1995, “Are there natural laws concerning particular species?”, Journal of Philosophy, 92: 430–451.
  • –––, 2004, “Bayesianism and unification: A reply to Myrvold”, Philosophy of Science , 71: 205–215.
  • Laursen, B.K., Ch. Gonnerman and S.J. Crowley, 2021, “Improving philosophical dialogue interventions to better resolve problematic value pluralism in collaborative environmental science”, Studies in History and Philosophy of Science , 87: 54–71.
  • Longino, H., 1998, Science as Social Knowledge , Princeton, NJ: Princeton University Press.
  • –––, 2002, The Fate of Knowledge , Princeton, NJ: Princeton University Press.
  • –––, 2013, Studying Human Behavior: How Scientists Investigate Aggression and Sexuality , Chicago: University of Chicago Press.
  • Lopes, D., 2009, “Drawing in a Social Science”, Perspectives on Science , 17(1): 5–25.
  • Love, A.C. and A. Hüttemann (2011), “Comparing part-whole explanations in biology and physics”, in D. Dieks, W.J. Gonzalez, S. Hartmann, T. Uebel, and M. Weber, eds., Explanation, prediction, and confirmation , Berlin: Springer, 183–202.
  • Lynch, M.P., 1998, Truth in Context. An Essay on Objectivity and Pluralism , Cambridge, MA: MIT Press.
  • Lynch, M. and S. Woolgar, 1990, Representation in Scientific Practice , Cambridge, MA: MIT Press.
  • MacLeod, M. and N.J. Nersessian, 2016, “Interdisciplinary problem-solving: emerging modes in integrative systems biology”, European Journal for Philosophy of Science , 6: 401–418.
  • Mäki, U., 2016, “Philosophy of interdisciplinarity. what? why? how?”, European Journal for Philosophy of Science , 6: 327–342.
  • Mäki, U. and M. MacLeod, 2016, “Interdisciplinarity in action: philosophy of science perspectives”, European Journal for Philosophy of Science, 6: 323–326.
  • Massimi, M and C.D. McCoy (eds.), 2020, Understanding Perspectivism: Scientific Challenges and Methodological Prospects , London: Routledge.
  • McAllister, J., 1996, Beauty and Revolution in Science , Ithaca, NY: Cornell University Press.
  • McArthur, D., 2006, “Contra Cartwright: structural realism, ontological pluralism and fundamentalism about laws”, Synthese , 151: 233–255.
  • McGivern, P., 2008, “Reductive levels and multi-scale structure”, Synthese , 165: 53–75.
  • McGrew, T., 2003, “Confirmation, heuristics, and explanatory reasoning”, British Journal for the Philosophy of Science, 54: 553–567.
  • McLaughlin, P., 1991, Kant’s Critique of Teleology in Biological Explanation, Evanston: Mellen Press.
  • Meehl, P. and W. Sellars, 1956, “The Concept of emergence,” in H. Feigl, ed., The Foundations of Science and the Concepts of Psychology and Psychoanalysis , Minneapolis: University of Minnesota Press, 239–252.
  • Meyering, T.C., 2000, “Physicalism and downward causation in psychology and the special sciences”, Inquiry , 43: 181–202.
  • Milkowski, M. and M. Hohol, 2021, “Explanations in cognitive science: unification versus pluralism”, Synthese , 199 (Supplement 1): 1–17.
  • Mill, J.S., 1843, System of Logic , London: John W. Parker.
  • Mitchell, S.D., 2003, Biological Complexity and Integrative Pluralism , Cambridge: Cambridge University Press.
  • –––, 2009, Unsimple Truths. Science, Complexity and Policy, Chicago: University of Chicago Press.
  • Morgan, M. and M. Morrison, 1999, Models as Mediators , Cambridge: Cambridge University Press.
  • Morgan, M. and N.M. Wise (eds.), 2017, Special Issue on Narrative in Science, Studies in the History and Philosophy of Science , Volume 82.
  • Morrison, M., 2000, Unifying Physical Theories. Physical Concepts and Mathematical Structures , New York: Cambridge University Press.
  • Moulines, C.U., 2006, “Ontology, reduction, emergence: A general frame”, Synthese , 151: 313–323.
  • Murdoch, I., 1992, Metaphysics as a Guide to Morals , London: Allen Lane.
  • Myrvold, W., 2003, “A Bayesian account of the virtue of unification”, Philosophy of Science , 70: 399–423.
  • –––, 2017, “On the evidential import of unification”, Philosophy of Science , 84(1): 92–114.
  • Nagel, E., 1961, The Structure of Science, New York: Harcourt, Brace and World.
  • Nathan, M.J., 2017, “Unificatory explanation”, British Journal for Philosophy of Science , 68: 163–186.
  • Nersessian, N.J., 2022, Interdisciplinarity in the Making , Cambridge, MA: MIT Press.
  • Nickles, T, 1973, “Two concepts of intertheoretic reduction”, Journal of Philosophy , 70: 181–201.
  • Oppenheim, P. and H. Putnam, 1958, “The unity of science as a working hypothesis”, in H. Feigl, et al. (eds.), Minnesota Studies in the Philosophy of Science , vol. 2, Minneapolis: Minnesota University Press.
  • Orrell, D., 2012, Truth or Beauty: Science or the Quest for Order , New Haven: Yale University Press.
  • Osbeck, L. M., Nersessian, N. J., Malone, K. R., and W.C. Newstetter, 2011, Science as Psychology , Cambridge: Cambridge University Press.
  • Patrick, K., 2018, “Unity as an epistemic virtue”, Erkenntnis , 83: 893–1002.
  • Pearce, T., 2015, “‘Science organized’: positivism and the Metaphysical Club, 1865–1875”, Journal of the History of Ideas , 76(3): 441–465.
  • Plutynski, A., 2005, “Explanatory unification and the early synthesis”, British Journal for Philosophy of Science 56: 595–609.
  • Poirier, P., 2006, “Finding a place for elimination in inter-level reductionist activities: Reply to Wimsatt”, Synthese , 151: 477–483.
  • Popper, K.R., 1935/1951, The Logic of Scientific Discovery , London: Unwin and Allen.
  • Potochnik, A., 2017, Idealization and the Aims of Science , Chicago: University of Chicago Press.
  • Putnam, H., 1975, “The nature of mental states”, in Philosophical Papers (Volume 2), Cambridge: Cambridge University Press, 429–440.
  • Ramsey, J., 1995, “Construction by reduction”, Philosophy of Science , 62: 1–20.
  • Redhead, M.L.G. and P. Teller, 1991, “Particles, Particle Labels, and Quanta: the Toll of Unacknowledged Metaphysics”, Foundations of Physics , 21: 43–62.
  • Repko, A.F., 2012, Interdisciplinarity: Process and Theory , Thousand Oaks: Sage.
  • Rescher, N., 1993, Pluralism , Oxford: Clarendon Press.
  • Richards, R., 2008, The Tragic Sense of Life: Ernst Haeckel and the Struggle over Evolutionary Thought , Chicago: University of Chicago Press.
  • Robinson, W.S., 2005, “Zooming in on downward causation”, Biology and Philosophy , 20: 117–136.
  • Roche, W. and E. Sober, 2017, “Explanation = unification? A new criticism of Friedman’s theory and a reply to an old one”, Philosophy of Science , 84(3): 391–413.
  • Rohrlich, F., 2001, “Realism despite cognitive antireductionism”, International Studies in the Philosophy of Science , 18: 73–88.
  • Rosenberg, A., 1994, Instrumental Biology, or the Disunity of Science , Chicago: University of Chicago Press.
  • Rueger, A., 2005, “Perspectival models and theory unification”, British Journal for Philosophy of Science , 56: 579–594.
  • Ruphy, S., 2005, “Why metaphysical abstinence should prevail in the debate on reductionism”, International Studies in the Philosophy of Science , 19: 105–121.
  • –––, 2016, Scientific Pluralism Reconsidered , Pittsburgh: University of Pittsburgh Press.
  • Ruttkamp, E., 2002, A Model-theoretic Realistic Interpretation of Science , Dordrecht: Kluwer.
  • Ruttkamp, E.B. and J. Heidema, 2005, “Reviewing reduction in a preferential model-theoretic context”, International Studies in Philosophy of Science , 19: 123–146.
  • Salmon, W., 1998, Causality and Explanation , New York: Oxford University Press.
  • Sarkar, S.,1998, Genetics and Reductionism , New York: Cambridge University Press.
  • Scerri, E., 1994, “Has chemistry been at least approximately reduced to quantum mechanics?” Proceedings of the 1994 Biennial Meeting of the Philosophy of Science Association (Volume 1): 160–170.
  • Schaffner, K., 1967, “Approaches to reductionism”, Philosophy of Science , 34: 137–147.
  • –––, 1993, Discovery and Explanation in Biology and Medicine , Chicago: University of Chicago Press.
  • –––, 2006, “Reduction: the Cheshire cat problem and a return to roots”, Synthese , 151: 377–402.
  • Schupach, J., 2005, “On a Bayesian analysis of the virtue of unification”, Philosophy of Science , 72: 594–607.
  • Schurz, G., 1999, “Explanation as unification”, Synthese , 120: 95–114.
  • –––, 2015, “Causality and unification: how causality unifies statistical regularities”, Theoria , 30(1): 73–95.
  • Schurz, G. and K. Lambert, 1994, “Outline of a theory of scientific understanding”, Synthese , 101: 65–120.
  • Simon, H., 1996, The Sciences of the Artificial , Cambridge, MA: The MIT Press.
  • Sklar, L., 1967, “Types of inter-theoretic reduction”, British Journal for Philosophy of Science , 18: 109–124.
  • –––, 2003, “Dappled theories in a uniform world”, Philosophy of Science , 70: 424–441.
  • Slater, M.H., 2005, “Monism on the one hand, pluralism on the other”, Philosophy of Science , 72: 22–42.
  • Smith, J., 2011, Divine Machines: Leibniz and the Sciences of Life , Princeton, NJ: Princeton University Press.
  • Smith, L.D., L.A. Best, D.A. Stubbs, J. Johnston and A.B. Archibald, 2000, “Scientific graphs and the hierarchy of the sciences: a Latourian survey of inscription practices”, Social Studies of Science , 30(1): 73–94.
  • Smocovitis, B., 1996, Unifying Biology , Princeton, NJ: Princeton University Press.
  • Snyder, L.J., 2006, Reforming Philosophy: A Victorian Debate on Science and Society , Chicago: University of Chicago Press.
  • Sober, E., 1999, “The Multiple realizability argument against reductionism”, Philosophy of Science , 66: 542–564.
  • –––, 2003, “Two uses of unification”, in F. Stadler (ed.), The Vienna Circle and Logical Empiricism , Dordrecht: Kluwer, 205–216.
  • –––, 2016, Ockham’s Razors. A User’s Manual , New York: Cambridge University Press.
  • Spector, M., 1978, Concepts of Reduction in Physical Science , Philadelphia: Temple University Press.
  • Sullivan, J.A., 2017, “Coordinated pluralism as a means to facilitate integrative taxonomies of cognition”, Philosophical Explorations , 20(2): 129–145.
  • Suppe, F., 1977, The Structure of Scientific Theories , Urbana and Chicago: University of Illinois Press.
  • Suppes, P, 1978, “The plurality of science”, P. Asquith and I. Hacking (eds.), PSA 1978: Proceedings of the 1978 Biennial Meeting of the Philosophy of Science Association (Volume 2), East Lansing, MI: Philosophy of Science Association: 3–16.
  • Tahko, T.E., 2021, Unity of Science , Cambridge: Cambridge University Press.
  • Teller, P., 2004, “How we dapple the world”, Philosophy of Science , 71: 425–447.
  • Thalos, M., 2013, Without Hierarchy. The Scale Freedom of the Universe , New York: Oxford University Press.
  • Toulmin, S., 1970, Physical Reality: Philosophical Essays on Twentieth-Century Physics, New York: Harper and Row.
  • Uebel, T., 2007, Empiricism at the Crossroads: The Vienna Circle’s Protocol-Sentence Debate , Chicago: Open Court.
  • van Bouwel, J., 2009a, “The problem with(out) consensus: the scientific consensus, deliberative democracy and agonistic pluralism”, in J. van Bouwel, ed., The Social Sciences and Democracy , London: Palgrave Macmillan.
  • van Bouwel, J. (ed.), 2009b, The Social Sciences and Democracy , London: Palgrave Macmillan.
  • van Gulick, R., 1992, “Nonreductive materialism and the nature of intertheoretical constraint”, in A. Beckmann, H. Flohr and J. Kim, eds., Emergence or Reduction? Essays on the Prospects of Nonreductive Physicalism , New York: de Gruyter.
  • van Riel, R., 2014, The Concept of Reduction , Cham: Springer.
  • Wayne, A., 1996, “Theoretical unity: the case of the standard model”, Perspectives on Science , 4: 391–407.
  • –––, 2002, “Critical notice: Margaret Morrison, Unifying scientific theories ”, Canadian Journal of Philosophy , 32: 117–138.
  • Weber, E. and M. Van Dyck, 2002, “Unification and explanation”, Synthese , 131: 145–154.
  • Wilson, J., 2021, Metaphysical Emergence , Oxford: Oxford University Press.
  • Wimsatt, W., 1976, “Reductionism, levels of organization and the mind-body problem”, in G. Globus, I. Savodnik and G. Maxwell, eds., Consciousness and the Brain , 199–167, New York: Plenum.
  • –––, 1980, “Reductionist research strategies and their biases in the units of selection controversy”, in T. Nickles, ed., Scientific Discovery: Case studies , Dordrecht: Reidel.
  • –––, 2006, “Reductionism and its heuristics: making methodological reductionism honest”, Synthese , 151: 445–475.
  • Winsberg, E., M. Frisch, K.M. Darling and A. Fine, 2000, “Review of Nancy Cartwright, The Dappled World: Essays on the Boundaries of Science ”, Journal of Philosophy , 97(7): 403–408.
  • Wise, M.N., 2011, “Science as historical narrative”, Erkenntnis , 75: 349–376.
  • Wolfram S., 1984, “Universality and complexity in cellular automata”, Physica D , 10: 1–35.
  • –––, 2002, A New Kind of Science , Champaign, IL: Wolfram Media.
  • Wood, A. and S.S. Hahn, 2011, The Cambridge History of Philosophy in the Nineteenth Century (1790–1870) , Cambridge: Cambridge University Press.
  • Woodward, J., 2003, Making Things Happen: A Causal Theory of Explanation , Oxford: Oxford University Press.
  • Wylie, A., 1999, “Rethinking unity as a ‘working hypothesis’ for philosophy of science: How archeologists exploit the disunities of science”, Perspectives on Science , 7: 293–317.
  • –––, 2015, “A plurality of pluralisms: Collaborative practice in archeology”, in F. Padovani, A. Richardson and J.Y. Tsou (eds.), Objectivity in Science: New Perspectives from Science and Technology Studies (Boston Studies in the Philosophy and History of Science: Volume 310), Cham: Springer, 189–210.

adaptationism | Aristotle | atomism: 17th to 20th century | Bacon, Roger | Carnap, Rudolf | chaos | Comte, Auguste | Condorcet, Marie-Jean-Antoine-Nicolas de Caritat, Marquis de: in the history of feminism | Democritus | Descartes, René | determinism: causal | Diderot, Denis | Dilthey, Wilhelm | economics: philosophy of | Einstein, Albert: philosophy of science | emergent properties | Empedocles | empiricism: logical | Feyerabend, Paul | Frege, Gottlob | Galileo Galilei | genetics | Hempel, Carl | Heraclitus | Hume, David | Kant, Immanuel | Leibniz, Gottfried Wilhelm | Mach, Ernst | many, problem of | mereology | Mill, John Stuart | monism | multiple realizability | Neurath, Otto | Newton, Isaac | Parmenides | physicalism | physics: intertheory relations in | Plato | Pythagoras | quantum mechanics | quantum theory: quantum field theory | Ramus, Petrus | reduction, scientific: in biology | Rickert, Heinrich | scientific pluralism | supervenience | Weber, Max | Whewell, William | Wittgenstein, Ludwig

Copyright © 2024 by Jordi Cat <jcat@indiana.edu>

Research Hypothesis in Psychology: Types & Examples

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

A research hypothesis (plural: hypotheses) is a specific, testable prediction about the anticipated results of a study, established at its outset. It is a key component of the scientific method.

Hypotheses connect theory to data and guide the research process towards expanding scientific understanding.

Some key points about hypotheses:

  • A hypothesis expresses an expected pattern or relationship. It connects the variables under investigation.
  • It is stated in clear, precise terms before any data collection or analysis occurs. This makes the hypothesis testable.
  • A hypothesis must be falsifiable. It should be possible, even if unlikely in practice, to collect data that disconfirms rather than supports the hypothesis.
  • Hypotheses guide research. Scientists design studies to explicitly evaluate hypotheses about how nature works.
  • For a hypothesis to be valid, it must be testable against empirical evidence. The evidence can then confirm or disprove the testable predictions.
  • Hypotheses are informed by background knowledge and observation, but go beyond what is already known to propose an explanation of how or why something occurs.
Predictions typically arise from a thorough knowledge of the research literature, curiosity about real-world problems or implications, and the integration of the two to advance theory. They build on existing literature while providing new insight.

Types of Research Hypotheses

Alternative Hypothesis

The research hypothesis is often called the alternative or experimental hypothesis in experimental research.

It typically suggests a potential relationship between two key variables: the independent variable, which the researcher manipulates, and the dependent variable, which is measured based on those changes.

The alternative hypothesis states a relationship exists between the two variables being studied (one variable affects the other).

An experimental hypothesis predicts what change(s) will occur in the dependent variable when the independent variable is manipulated.

It states that the results are not due to chance and are significant in supporting the theory being investigated.

The alternative hypothesis can be directional, indicating a specific direction of the effect, or non-directional, suggesting a difference without specifying its nature. It’s what researchers aim to support or demonstrate through their study.

Null Hypothesis

The null hypothesis states no relationship exists between the two variables being studied (one variable does not affect the other). There will be no changes in the dependent variable due to manipulating the independent variable.

It states results are due to chance and are not significant in supporting the idea being investigated.

The null hypothesis, positing no effect or relationship, is a foundational contrast to the research hypothesis in scientific inquiry. It establishes a baseline for statistical testing, promoting objectivity by initiating research from a neutral stance.

Many statistical methods are tailored to test the null hypothesis, determining the likelihood of observed results if no true effect exists.

This dual-hypothesis approach provides clarity, ensuring that research intentions are explicit, and fosters consistency across scientific studies, enhancing the standardization and interpretability of research outcomes.
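As a minimal illustration of this dual-hypothesis logic (not part of the original article), the Python sketch below runs an independent-samples t-test on invented scores: the null hypothesis is that the two group means are equal, and a small p-value is treated as grounds to reject it. The group names, data, and alpha level are all assumptions made for the example.

```python
# Minimal sketch: testing a null hypothesis of "no difference between groups"
# against the alternative that a difference exists. The data are invented.
from scipy import stats

control = [72, 68, 75, 70, 66, 74, 69, 71]    # e.g., studied in silence
treatment = [78, 74, 80, 77, 73, 79, 75, 76]  # e.g., studied with music

# Independent-samples t-test: H0 = the two group means are equal.
t_stat, p_value = stats.ttest_ind(treatment, control)

alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f}: reject the null hypothesis (a difference is supported)")
else:
    print(f"p = {p_value:.4f}: fail to reject the null hypothesis")
```

Note that "rejecting the null" here only means the observed difference would be unlikely if no true effect existed; it does not prove the alternative hypothesis.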

Nondirectional Hypothesis

A non-directional hypothesis, also known as a two-tailed hypothesis, predicts that there is a difference or relationship between two variables but does not specify the direction of this relationship.

It merely indicates that a change or effect will occur without predicting which group will have higher or lower values.

For example, “There is a difference in performance between Group A and Group B” is a non-directional hypothesis.

Directional Hypothesis

A directional (one-tailed) hypothesis predicts the nature of the effect of the independent variable on the dependent variable: it predicts the direction in which the change will take place (i.e., greater, smaller, more, or less).

It specifies whether one variable is greater, lesser, or different from another, rather than just indicating that there’s a difference without specifying its nature.

For example, “Exercise increases weight loss” is a directional hypothesis.
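To make the one- versus two-tailed distinction concrete, here is a small illustrative sketch. The data are invented, and it assumes SciPy's `alternative` argument (available in SciPy 1.6 and later); the same scores are tested once with a non-directional and once with a directional alternative hypothesis.

```python
# Sketch: the same data tested with a non-directional (two-tailed) and a
# directional (one-tailed) alternative hypothesis. Scores are invented.
from scipy import stats

group_a = [12, 15, 14, 10, 13, 16, 11, 14]
group_b = [9, 11, 10, 8, 12, 10, 9, 11]

# Non-directional: "there is a difference between Group A and Group B"
two_tailed = stats.ttest_ind(group_a, group_b, alternative="two-sided")

# Directional: "Group A scores higher than Group B"
one_tailed = stats.ttest_ind(group_a, group_b, alternative="greater")

print(f"two-tailed p = {two_tailed.pvalue:.4f}")
# When the observed effect lies in the predicted direction, the one-tailed
# p-value is half the two-tailed p-value.
print(f"one-tailed p = {one_tailed.pvalue:.4f}")
```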


Falsifiability

The Falsification Principle, proposed by Karl Popper, is a way of demarcating science from non-science. It suggests that for a theory or hypothesis to be considered scientific, it must be testable and refutable.

Falsifiability emphasizes that scientific claims shouldn’t just be confirmable but should also have the potential to be proven wrong.

It means that there should exist some potential evidence or experiment that could prove the proposition false.

However many confirming instances exist for a theory, it only takes one counter observation to falsify it. For example, the hypothesis that “all swans are white,” can be falsified by observing a black swan.

For Popper, science should attempt to disprove a theory rather than attempt to continually provide evidence to support a research hypothesis.

Can a Hypothesis be Proven?

Hypotheses make probabilistic predictions. They state the expected outcome if a particular relationship exists. However, a study result supporting a hypothesis does not definitively prove it is true.

All studies have limitations. There may be unknown confounding factors or issues that limit the certainty of conclusions. Additional studies may yield different results.

In science, hypotheses can realistically only be supported with some degree of confidence, not proven. The process of science is to incrementally accumulate evidence for and against hypothesized relationships in an ongoing pursuit of better models and explanations that best fit the empirical data. But hypotheses remain open to revision and rejection if that is where the evidence leads.
  • Disproving a hypothesis is definitive. Solid disconfirmatory evidence will falsify a hypothesis and require altering or discarding it based on the evidence.
  • However, confirming evidence is always open to revision. Other explanations may account for the same results, and additional or contradictory evidence may emerge over time.

We can never 100% prove the alternative hypothesis. Instead, we see if we can disprove, or reject the null hypothesis.

Rejecting the null hypothesis does not prove that the alternative hypothesis is correct, but it does lend support to the alternative/experimental hypothesis.

Upon analysis of the results, an alternative hypothesis can be rejected or supported, but it can never be proven to be correct. We must avoid any reference to results proving a theory as this implies 100% certainty, and there is always a chance that evidence may exist which could refute a theory.

How to Write a Hypothesis

  • Identify variables . The researcher manipulates the independent variable and the dependent variable is the measured outcome.
  • Operationalize the variables being investigated . Operationalization of a hypothesis refers to the process of making the variables physically measurable or testable, e.g., if you were studying aggression, you might count the number of punches given by participants.
  • Decide on a direction for your prediction . If there is evidence in the literature to support a specific effect of the independent variable on the dependent variable, write a directional (one-tailed) hypothesis. If there are limited or ambiguous findings in the literature regarding the effect of the independent variable on the dependent variable, write a non-directional (two-tailed) hypothesis.
  • Make it Testable : Ensure your hypothesis can be tested through experimentation or observation. It should be possible to prove it false (principle of falsifiability).
  • Clear & concise language . A strong hypothesis is concise (typically one to two sentences long), and formulated using clear and straightforward language, ensuring it’s easily understood and testable.

Consider a hypothesis many teachers might subscribe to: students work better on Monday morning than on Friday afternoon (IV=Day, DV= Standard of work).

Now, if we decide to study this by giving the same group of students a lesson on a Monday morning and a Friday afternoon and then measuring their immediate recall of the material covered in each session, we would end up with the following:

  • The alternative hypothesis states that students will recall significantly more information on a Monday morning than on a Friday afternoon.
  • The null hypothesis states that there will be no significant difference in the amount recalled on a Monday morning compared to a Friday afternoon. Any difference will be due to chance or confounding factors.
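As an illustrative sketch only (the recall scores below are invented, and the choice of test is ours, not the article's), this is how the Monday-versus-Friday hypotheses above might be evaluated once data were collected from the same group of students. Because each student is measured twice, a paired (related-samples) t-test is the natural fit; the directional form assumes a reasonably recent SciPy.

```python
# Sketch of the Monday-vs-Friday example: the same students are tested twice,
# so a paired (related-samples) t-test is used. Scores are invented.
from scipy import stats

monday_recall = [18, 22, 19, 24, 20, 17, 23, 21]  # items recalled Monday morning
friday_recall = [15, 20, 18, 21, 17, 16, 20, 19]  # items recalled Friday afternoon

# Directional alternative hypothesis: Monday recall > Friday recall.
result = stats.ttest_rel(monday_recall, friday_recall, alternative="greater")

print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A small p-value would lead us to reject the null hypothesis of "no difference".
```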

More Examples

  • Memory : Participants exposed to classical music during study sessions will recall more items from a list than those who studied in silence.
  • Social Psychology : Individuals who frequently engage in social media use will report higher levels of perceived social isolation compared to those who use it infrequently.
  • Developmental Psychology : Children who engage in regular imaginative play have better problem-solving skills than those who don’t.
  • Clinical Psychology : Cognitive-behavioral therapy will be more effective in reducing symptoms of anxiety over a 6-month period compared to traditional talk therapy.
  • Cognitive Psychology : Individuals who multitask between various electronic devices will have shorter attention spans on focused tasks than those who single-task.
  • Health Psychology : Patients who practice mindfulness meditation will experience lower levels of chronic pain compared to those who don’t meditate.
  • Organizational Psychology : Employees in open-plan offices will report higher levels of stress than those in private offices.
  • Behavioral Psychology : Rats rewarded with food after pressing a lever will press it more frequently than rats who receive no reward.




How to Write a Great Hypothesis

Hypothesis Definition, Format, Examples, and Tips

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."



A hypothesis is a tentative statement about the relationship between two or more variables. It is a specific, testable prediction about what you expect to happen in a study. It is a preliminary answer to your question that helps guide the research process.

Consider a study designed to examine the relationship between sleep deprivation and test performance. The hypothesis might be: "This study is designed to assess the hypothesis that sleep-deprived people will perform worse on a test than individuals who are not sleep-deprived."

At a Glance

A hypothesis is crucial to scientific research because it offers a clear direction for what the researchers are looking to find. This allows them to design experiments to test their predictions and add to our scientific knowledge about the world. This article explores how a hypothesis is used in psychology research, how to write a good hypothesis, and the different types of hypotheses you might use.

The Hypothesis in the Scientific Method

In the scientific method , whether it involves research in psychology, biology, or some other area, a hypothesis represents what the researchers think will happen in an experiment. The scientific method involves the following steps:

  • Forming a question
  • Performing background research
  • Creating a hypothesis
  • Designing an experiment
  • Collecting data
  • Analyzing the results
  • Drawing conclusions
  • Communicating the results

The hypothesis is a prediction, but it involves more than a guess. Most of the time, the hypothesis begins with a question which is then explored through background research. At this point, researchers then begin to develop a testable hypothesis.

Unless you are creating an exploratory study, your hypothesis should always explain what you  expect  to happen.

In a study exploring the effects of a particular drug, the hypothesis might be that researchers expect the drug to have some type of effect on the symptoms of a specific illness. In psychology, the hypothesis might focus on how a certain aspect of the environment might influence a particular behavior.

Remember, a hypothesis does not have to be correct. While the hypothesis predicts what the researchers expect to see, the goal of the research is to determine whether this guess is right or wrong. When conducting an experiment, researchers might explore numerous factors to determine which ones might contribute to the ultimate outcome.

In many cases, researchers may find that the results of an experiment  do not  support the original hypothesis. When writing up these results, the researchers might suggest other options that should be explored in future studies.

In many cases, researchers might draw a hypothesis from a specific theory or build on previous research. For example, prior research has shown that stress can impact the immune system. So a researcher might hypothesize: "People with high-stress levels will be more likely to contract a common cold after being exposed to the virus than people who have low-stress levels."

In other instances, researchers might look at commonly held beliefs or folk wisdom. "Birds of a feather flock together" is one example of a folk adage that a psychologist might try to investigate. The researcher might pose a specific hypothesis that "People tend to select romantic partners who are similar to them in interests and educational level."

Elements of a Good Hypothesis

So how do you write a good hypothesis? When trying to come up with a hypothesis for your research or experiments, ask yourself the following questions:

  • Is your hypothesis based on your research on a topic?
  • Can your hypothesis be tested?
  • Does your hypothesis include independent and dependent variables?

Before you come up with a specific hypothesis, spend some time doing background research. Once you have completed a literature review, start thinking about potential questions you still have. Pay attention to the discussion section in the  journal articles you read . Many authors will suggest questions that still need to be explored.

How to Formulate a Good Hypothesis

To form a hypothesis, you should take these steps:

  • Collect as many observations about a topic or problem as you can.
  • Evaluate these observations and look for possible causes of the problem.
  • Create a list of possible explanations that you might want to explore.
  • After you have developed some possible hypotheses, think of ways that you could confirm or disprove each hypothesis through experimentation. This is known as falsifiability.

In the scientific method ,  falsifiability is an important part of any valid hypothesis. In order to test a claim scientifically, it must be possible that the claim could be proven false.

Students sometimes confuse the idea of falsifiability with the idea that it means that something is false, which is not the case. What falsifiability means is that  if  something was false, then it is possible to demonstrate that it is false.

One of the hallmarks of pseudoscience is that it makes claims that cannot be refuted or proven false.

The Importance of Operational Definitions

A variable is a factor or element that can be changed and manipulated in ways that are observable and measurable. However, the researcher must also define how the variable will be manipulated and measured in the study.

Operational definitions are specific definitions for all relevant factors in a study. This process helps make vague or ambiguous concepts detailed and measurable.

For example, a researcher might operationally define the variable " test anxiety " as the results of a self-report measure of anxiety experienced during an exam. A "study habits" variable might be defined by the amount of studying that actually occurs as measured by time.

These precise descriptions are important because many things can be measured in various ways. Clearly defining these variables and how they are measured helps ensure that other researchers can replicate your results.
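A brief sketch of the same idea in code (the class, scoring rules, and numbers are hypothetical, not drawn from any real instrument): each operational definition becomes an explicit, reproducible computation that another researcher could copy exactly.

```python
# Sketch: operational definitions expressed as measurable quantities.
# Variable names and scoring rules are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Participant:
    anxiety_item_scores: list  # self-report items rated 1-5 during an exam
    minutes_studied: float     # time actually spent studying, in minutes

def test_anxiety(p: Participant) -> float:
    """Operational definition: mean of the self-report anxiety items."""
    return sum(p.anxiety_item_scores) / len(p.anxiety_item_scores)

def study_habits(p: Participant) -> float:
    """Operational definition: hours of measured study time."""
    return p.minutes_studied / 60

p = Participant(anxiety_item_scores=[4, 3, 5, 4], minutes_studied=150)
print(test_anxiety(p), study_habits(p))  # 4.0 2.5
```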

Replicability

One of the basic principles of any type of scientific research is that the results must be replicable.

Replication means repeating an experiment in the same way to produce the same results. By clearly detailing the specifics of how the variables were measured and manipulated, other researchers can better understand the results and repeat the study if needed.

Some variables are more difficult than others to define. For example, how would you operationally define a variable such as aggression ? For obvious ethical reasons, researchers cannot create a situation in which a person behaves aggressively toward others.

To measure this variable, the researcher must devise a measurement that assesses aggressive behavior without harming others. The researcher might utilize a simulated task to measure aggressiveness in this situation.

Hypothesis Checklist

  • Does your hypothesis focus on something that you can actually test?
  • Does your hypothesis include both an independent and dependent variable?
  • Can you manipulate the variables?
  • Can your hypothesis be tested without violating ethical standards?

The hypothesis you use will depend on what you are investigating and hoping to find. Some of the main types of hypotheses that you might use include:

  • Simple hypothesis : This type of hypothesis suggests there is a relationship between one independent variable and one dependent variable.
  • Complex hypothesis : This type suggests a relationship between three or more variables, such as two independent and dependent variables.
  • Null hypothesis : This hypothesis suggests no relationship exists between two or more variables.
  • Alternative hypothesis : This hypothesis states the opposite of the null hypothesis.
  • Statistical hypothesis : This hypothesis uses statistical analysis to evaluate a representative population sample and then generalizes the findings to the larger group.
  • Logical hypothesis : This hypothesis assumes a relationship between variables without collecting data or evidence.

A hypothesis often follows a basic format of "If {this happens} then {this will happen}." One way to structure your hypothesis is to describe what will happen to the  dependent variable  if you change the  independent variable .

The basic format might be: "If {these changes are made to a certain independent variable}, then we will observe {a change in a specific dependent variable}."

A few examples of simple hypotheses:

  • "Students who eat breakfast will perform better on a math exam than students who do not eat breakfast."
  • "Students who experience test anxiety before an English exam will get lower scores than students who do not experience test anxiety."​
  • "Motorists who talk on the phone while driving will be more likely to make errors on a driving course than those who do not talk on the phone."
  • "Children who receive a new reading intervention will have higher reading scores than students who do not receive the intervention."

Examples of a complex hypothesis include:

  • "People with high-sugar diets and sedentary activity levels are more likely to develop depression."
  • "Younger people who are regularly exposed to green, outdoor areas have better subjective well-being than older adults who have limited exposure to green spaces."

Examples of a null hypothesis include:

  • "There is no difference in anxiety levels between people who take St. John's wort supplements and those who do not."
  • "There is no difference in scores on a memory recall task between children and adults."
  • "There is no difference in aggression levels between children who play first-person shooter games and those who do not."

Examples of an alternative hypothesis:

  • "People who take St. John's wort supplements will have less anxiety than those who do not."
  • "Adults will perform better on a memory task than children."
  • "Children who play first-person shooter games will show higher levels of aggression than children who do not." 

Collecting Data on Your Hypothesis

Once a researcher has formed a testable hypothesis, the next step is to select a research design and start collecting data. The research method depends largely on exactly what they are studying. There are two basic types of research methods: descriptive research and experimental research.

Descriptive Research Methods

Descriptive research such as  case studies ,  naturalistic observations , and surveys are often used when  conducting an experiment is difficult or impossible. These methods are best used to describe different aspects of a behavior or psychological phenomenon.

Once a researcher has collected data using descriptive methods, a  correlational study  can examine how the variables are related. This research method might be used to investigate a hypothesis that is difficult to test experimentally.
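As a hedged example of such a correlational analysis (all numbers are invented; `scipy.stats.pearsonr` is one common choice among many), the sketch below estimates how strongly two descriptively measured variables co-vary and reports a p-value against the null of "no correlation."

```python
# Sketch: a correlational analysis of survey data gathered with descriptive
# methods. The numbers are invented purely for illustration.
from scipy import stats

hours_social_media = [1.0, 2.5, 3.0, 4.5, 5.0, 6.5, 7.0, 8.0]
loneliness_score = [10, 12, 15, 18, 17, 22, 25, 27]

r, p_value = stats.pearsonr(hours_social_media, loneliness_score)
print(f"r = {r:.2f}, p = {p_value:.4f}")
# A correlation, even a strong one, does not by itself show that one
# variable causes the other.
```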

Experimental Research Methods

Experimental methods  are used to demonstrate causal relationships between variables. In an experiment, the researcher systematically manipulates a variable of interest (known as the independent variable) and measures the effect on another variable (known as the dependent variable).

Unlike correlational studies, which can only be used to determine if there is a relationship between two variables, experimental methods can be used to determine the actual nature of the relationship—whether changes in one variable actually  cause  another to change.

The hypothesis is a critical part of any scientific exploration. It represents what researchers expect to find in a study or experiment. In situations where the hypothesis is unsupported by the research, the research still has value. Such research helps us better understand how different aspects of the natural world relate to one another. It also helps us develop new hypotheses that can then be tested in the future.




How to Write a Strong Hypothesis | Steps & Examples

Published on May 6, 2022 by Shona McCombes . Revised on November 20, 2023.

A hypothesis is a statement that can be tested by scientific research. If you want to test a relationship between two or more variables, you need to write hypotheses before you start your experiment or data collection .

Example: Hypothesis

Daily apple consumption leads to fewer doctor’s visits.


A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess – it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Variables in hypotheses

Hypotheses propose a relationship between two or more types of variables .

  • An independent variable is something the researcher changes or controls.
  • A dependent variable is something the researcher observes and measures.

If there are any control variables , extraneous variables , or confounding variables , be sure to jot those down as you go to minimize the chances that research bias  will affect your results.

Consider, for example, the hypothesis that daily exposure to the sun leads to increased levels of happiness. In this example, the independent variable is exposure to the sun – the assumed cause. The dependent variable is the level of happiness – the assumed effect.

Developing a hypothesis (with example)

Step 1. Ask a question

Writing a hypothesis begins with a research question that you want to answer. The question should be focused, specific, and researchable within the constraints of your project.

Step 2. Do some preliminary research

Your initial answer to the question should be based on what is already known about the topic. Look for theories and previous studies to help you form educated assumptions about what your research will find.

At this stage, you might construct a conceptual framework to ensure that you’re embarking on a relevant topic . This can also help you identify which variables you will study and what you think the relationships are between them. Sometimes, you’ll have to operationalize more complex constructs.

Step 3. Formulate your hypothesis

Now you should have some idea of what you expect to find. Write your initial answer to the question in a clear, concise sentence.

Step 4. Refine your hypothesis

You need to make sure your hypothesis is specific and testable. There are various ways of phrasing a hypothesis, but all the terms you use should have clear definitions, and the hypothesis should contain:

  • The relevant variables
  • The specific group being studied
  • The predicted outcome of the experiment or analysis

Step 5. Phrase your hypothesis in three ways

To identify the variables, you can write a simple prediction in  if…then form. The first part of the sentence states the independent variable and the second part states the dependent variable.

In academic research, hypotheses are more commonly phrased in terms of correlations or effects, where you directly state the predicted relationship between variables.

If you are comparing two groups, the hypothesis can state what difference you expect to find between them.

Step 6. Write a null hypothesis

If your research involves statistical hypothesis testing , you will also have to write a null hypothesis . The null hypothesis is the default position that there is no association between the variables. The null hypothesis is written as H 0 , while the alternative hypothesis is H 1 or H a .

  • H 0 : The number of lectures attended by first-year students has no effect on their final exam scores.
  • H 1 : The number of lectures attended by first-year students has a positive effect on their final exam scores.
Research question | Hypothesis | Null hypothesis
What are the health benefits of eating an apple a day? | Increasing apple consumption in over-60s will result in decreasing frequency of doctor’s visits. | Increasing apple consumption in over-60s will have no effect on frequency of doctor’s visits.
Which airlines have the most delays? | Low-cost airlines are more likely to have delays than premium airlines. | Low-cost and premium airlines are equally likely to have delays.
Can flexible work arrangements improve job satisfaction? | Employees who have flexible working hours will report greater job satisfaction than employees who work fixed hours. | There is no relationship between working hour flexibility and job satisfaction.
How effective is high school sex education at reducing teen pregnancies? | Teenagers who received sex education lessons throughout high school will have lower rates of unplanned pregnancy than teenagers who did not receive any sex education. | High school sex education has no effect on teen pregnancy rates.
What effect does daily use of social media have on the attention span of under-16s? | There is a negative correlation between time spent on social media and attention span in under-16s. | There is no relationship between social media use and attention span in under-16s.
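A minimal sketch of how the lecture-attendance example above might be tested against H0, assuming invented data and a correlation-based test; the one-sided adjustment is the standard halving of the two-sided p-value when the observed effect is in the predicted direction.

```python
# Sketch of the H0 / H1 example above: does lecture attendance relate to
# final exam scores? The data are invented for illustration.
from scipy import stats

lectures_attended = [4, 6, 8, 10, 12, 14, 16, 18, 20, 22]
exam_scores = [52, 55, 58, 61, 60, 66, 70, 72, 75, 78]

r, p_two_sided = stats.pearsonr(lectures_attended, exam_scores)

# H1 predicts a *positive* effect, so a directional test halves the
# two-sided p-value when the observed correlation is in that direction.
p_one_sided = p_two_sided / 2 if r > 0 else 1 - p_two_sided / 2

print(f"r = {r:.2f}, one-sided p = {p_one_sided:.4f}")
# A small p-value: reject H0 (no effect) in favour of H1 (positive effect).
```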



Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
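One way to see what "could have arisen by chance" means in practice is a permutation test. The following sketch (invented data, plain Python, not taken from any of the sources above) shuffles group labels many times to approximate the distribution of mean differences expected under the null hypothesis:

```python
# Sketch: a simple permutation test. Group labels are shuffled repeatedly to
# build the distribution of differences expected if the null hypothesis holds.
import random

group_a = [14, 16, 15, 18, 17, 19]
group_b = [12, 13, 11, 14, 12, 13]
observed_diff = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

pooled = group_a + group_b
n_a = len(group_a)
count = 0
n_iter = 10_000
for _ in range(n_iter):
    random.shuffle(pooled)
    diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
    if diff >= observed_diff:
        count += 1

p_value = count / n_iter
print(f"observed difference = {observed_diff:.2f}, permutation p = {p_value:.4f}")
```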




scientific method


scientific method, mathematical and experimental technique employed in the sciences. More specifically, it is the technique used in the construction and testing of a scientific hypothesis.

The process of observing, asking questions, and seeking answers through tests and experiments is not unique to any one field of science. In fact, the scientific method is applied broadly in science, across many different fields. Many empirical sciences, especially the social sciences , use mathematical tools borrowed from probability theory and statistics , together with outgrowths of these, such as decision theory , game theory , utility theory, and operations research . Philosophers of science have addressed general methodological problems, such as the nature of scientific explanation and the justification of induction .


The scientific method is critical to the development of scientific theories , which explain empirical (experiential) laws in a scientifically rational manner. In a typical application of the scientific method, a researcher develops a hypothesis , tests it through various means, and then modifies the hypothesis on the basis of the outcome of the tests and experiments. The modified hypothesis is then retested, further modified, and tested again, until it becomes consistent with observed phenomena and testing outcomes. In this way, hypotheses serve as tools by which scientists gather data. From that data and the many different scientific investigations undertaken to explore hypotheses, scientists are able to develop broad general explanations, or scientific theories.

See also Mill’s methods ; hypothetico-deductive method .


Definition of hypothesis

Did you know?

The Difference Between Hypothesis and Theory

A hypothesis is an assumption, an idea that is proposed for the sake of argument so that it can be tested to see if it might be true.

In the scientific method, the hypothesis is constructed before any applicable research has been done, apart from a basic background review. You ask a question, read up on what has been studied before, and then form a hypothesis.

A hypothesis is usually tentative; it's an assumption or suggestion made strictly for the objective of being tested.

A theory , in contrast, is a principle that has been formed as an attempt to explain things that have already been substantiated by data. It is used in the names of a number of principles accepted in the scientific community, such as the Big Bang Theory . Because of the rigors of experimentation and control, it is understood to be more likely to be true than a hypothesis is.

In non-scientific use, however, hypothesis and theory are often used interchangeably to mean simply an idea, speculation, or hunch, with theory being the more common choice.

Since this casual use does away with the distinctions upheld by the scientific community, hypothesis and theory are prone to being wrongly interpreted even when they are encountered in scientific contexts—or at least, contexts that allude to scientific study without making the critical distinction that scientists employ when weighing hypotheses and theories.

The most common occurrence is when theory is interpreted—and sometimes even gleefully seized upon—to mean something having less truth value than other scientific principles. (The word law applies to principles so firmly established that they are almost never questioned, such as the law of gravity.)

This mistake is one of projection: since we use theory in general to mean something lightly speculated, then it's implied that scientists must be talking about the same level of uncertainty when they use theory to refer to their well-tested and reasoned principles.

The distinction has come to the forefront particularly on occasions when the content of science curricula in schools has been challenged—notably, when a school board in Georgia put stickers on textbooks stating that evolution was "a theory, not a fact, regarding the origin of living things." As Kenneth R. Miller, a cell biologist at Brown University, has said , a theory "doesn’t mean a hunch or a guess. A theory is a system of explanations that ties together a whole bunch of facts. It not only explains those facts, but predicts what you ought to find from other observations and experiments.”

While theories are never completely infallible, they form the basis of scientific reasoning because, as Miller said "to the best of our ability, we’ve tested them, and they’ve held up."

  • proposition
  • supposition

hypothesis, theory, law mean a formula derived by inference from scientific data that explains a principle operating in nature.

hypothesis implies insufficient evidence to provide more than a tentative explanation.

theory implies a greater range of evidence and greater likelihood of truth.

law implies a statement of order and relation in nature that has been found to be invariable under the same conditions.


Word History

Greek, from hypotithenai to put under, suppose, from hypo- + tithenai to put — more at do

1641, in the meaning defined at sense 1a

Phrases Containing hypothesis

  • counter-hypothesis
  • nebular hypothesis
  • null hypothesis
  • planetesimal hypothesis
  • Whorfian hypothesis




Indian J Otolaryngol Head Neck Surg, 65(Suppl 2), August 2013


Chronic Rhino-Sinusitis and Asthma: Concept of Unified Airway Disease (UAD) and its Impact in Otolaryngology

Rakesh Singh Meena

J. L. N. Medical College, Ajmer, Rajasthan India

Deepali Meena

Yogesh Aseri, B. K. Singh, P. C. Verma

The aim of our study is to understand the concept of unified airway disease, to recognize the advantages of this concept in the diagnosis and treatment of allergic rhinitis, chronic rhino-sinusitis and asthma, to appreciate its impact on the practice of otolaryngology, and to motivate otorhinolaryngologists to apply this concept in diagnosis and treatment. This article is based on our experience with 20 cases of chronic rhino-sinusitis and asthma, together with observations and results from the literature. Implementing the concept of unified airway disease, and being able to translate its principles into successful diagnostic and treatment strategies, can enhance the practice of otolaryngology. The end result is the potential for improved patient care. In our study, 80% of cases had a reduced frequency of symptoms and all (100%) cases had improved night-time symptoms; thus the use of short-acting beta2 agonists to control asthma symptoms decreased.

Introduction

Until recently, rhinitis and asthma have been evaluated and treated as separate disorders, but current opinion has moved toward the concept of unifying the management of these disorders. The unified airway disease (UAD) hypothesis proposes that upper and lower airway diseases are both manifestations of a single inflammatory process within the respiratory tract. Synonyms of UAD include allergic rhinobronchitis and combined allergic rhino-sinusitis and asthma. IgE-mediated allergic responses to inhaled allergens cause symptoms of asthma and rhino-sinusitis, but there is increasing evidence of a systemic link between the lower and upper airway.

Concept of Airway

Over the course of the past two decades, the concept of inflammation involving both the upper and lower airways has become increasingly recognized and studied. When examined, asthma, allergy, and rhino-sinusitis appear to behave similarly and in conjunction with one another in many cases, suggestive of an integration of the involved areas of the airway. This pattern of similarities has given rise to the concept of the unified airway model, which, simply stated, considers the entire respiratory system to represent a functional unit that consists of the nose, paranasal sinuses, larynx, trachea, and distal lung [1]. The broad number of inflammatory diseases that occur within this functional unit present to a variety of specialties, including otolaryngology, pulmonology, primary care, and allergy [11]. Similarly, literature related to this concept is distributed among the literature of each of these specialties.

The unified airway model provides a conceptual framework for understanding and managing patients who have both upper and lower airway inflammatory disease. Through appreciating the relationships that exist among diseases such as otitis media, allergic rhinitis, acute and chronic rhino-sinusitis, and asthma, physicians can be more thorough in their diagnosis and treatment of patients who have airway disorders and can implement effective treatment strategies to decrease the burden and symptomatic expression of disease.

Criteria in Support of a Unified Airway

Patients with upper airway diseases have a higher prevalence of lower respiratory diseases such as asthma; as a corollary, an increased prevalence of upper respiratory diseases is also found among patients with lower respiratory disease.

Interrelated pathophysiological mechanisms between the upper and lower airways exist to explain the interaction of these two disease processes, and treatment of one portion of the unified airway improves symptoms in another portion of the respiratory system.

Proposed pathophysiological links between the upper and lower airways:

  • Systemic interaction (inflammatory crosstalk): The mechanism by which communication occurs between the upper and lower airways is suggested to be via systemic inflammatory response (IL-4, IL-5, IL-13, Eotaxin, etc.,) [ 1 , 12 ].
  • Loss of conditioning by nose and PNS : Allergic rhinitis has adverse effects on the lower airway by the promotion of breathing through the mouth. High nasal nitric oxide concentrations (up to 100 times greater than in orally exhaled air) are thought to have antiviral, bacteriostatic, and bronchodilator effects on the lower airway (conditioning of inspired air).
  • Nasobronchial reflex has also been suggested (e.g., transient bronchoconstriction resulting from irritant stimulation of nasal mucosa).
  • Pharyngobronchial reflex : Irritation of the hypopharynx with sinus secretions leads to bronchoconstriction and reduction in airflow rates.

Materials and Methods

This was a prospective study of 20 cases (15–60 years age group) of asthma (diagnosed by a physician) having chronic rhino-sinusitis (i.e., nasal signs/symptoms of more than 12 weeks, e.g., nasal discharge, blockage, post-nasal drip, hyposmia/anosmia, with or without facial pain/pressure/fullness/headache). These patients were sent to the ENT OPD to treat the nasal symptoms and were diagnosed as rhino-sinusitis with asthma. Rhino-sinusitis was diagnosed by clinical features and radiological investigations (e.g., X-ray PNS Waters view/CT scan if required).

Asthma was diagnosed by the following clinical features:

  • Shortness of breath
  • Coughing in cold and night
  • Chest tightness

All 20 cases were categorized into four groups ((1) intermittent, (2) mild persistent, (3) moderate persistent, (4) severe persistent), depending on the frequency of asthma symptoms, night-time symptoms, and use of beta2 agonist for control of symptoms. Then rhino-sinusitis was treated medically (in 18 cases) or both medically and surgically (FESS in 2 cases). After 6 months of rhino-sinusitis treatment we compared the asthma symptoms and the need for asthma medication before and after treatment of rhino-sinusitis, and a conclusion was made.

Observations and Results

Table 1 shows that all 20 cases were grouped into four groups depending upon the severity of symptoms and the requirement of short-acting beta2 agonist for symptom control. All patients in an individual group had equal severity of symptoms (depending upon frequency of symptoms and night-time symptoms) and so required the same amount of beta2 agonist for control of symptoms.

Table 1

Before treatment of rhino-sinusitis the symptoms severity of asthma and requirement of short-acting beta2 agonist

Severity in patients (no. of patients) | Symptom frequency | Night-time symptoms | Use of short-acting beta2 agonist for symptom control
Intermittent (5) | ≤2 per week | ≤2 per month | ≤2 days per week
Mild persistent (7) | >2 per week but not daily | 3–4 per month | >2 days/week but not daily
Moderate persistent (6) | Daily | >1 per week but not nightly | Daily
Severe persistent (2) | Throughout the day | Frequent (often 7×/week) | Several times per day
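Purely as an illustration of the thresholds in Table 1 (the function name and simplified inputs are ours, not the authors'; the study itself graded severity clinically), the classification could be sketched as a rule-based function:

```python
# Sketch: the asthma-severity categories of Table 1 expressed as a simplified
# rule-based classifier. Illustrative only; not the authors' clinical procedure.
def classify_asthma_severity(symptoms_per_week: float,
                             night_symptoms_per_month: float) -> str:
    if symptoms_per_week <= 2 and night_symptoms_per_month <= 2:
        return "intermittent"
    if symptoms_per_week < 7 and night_symptoms_per_month <= 4:
        return "mild persistent"
    if night_symptoms_per_month < 30:  # more than weekly but not nightly
        return "moderate persistent"
    return "severe persistent"

print(classify_asthma_severity(2, 2))    # intermittent
print(classify_asthma_severity(4, 3))    # mild persistent
print(classify_asthma_severity(7, 8))    # moderate persistent
print(classify_asthma_severity(21, 30))  # severe persistent
```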

All four tables (Tables 2–5) represent the four different groups of patients after medical and surgical (required in 2 cases) treatment of rhino-sinusitis. Tables 2, 3, 4 and 5 show that the frequency of symptoms decreased in 80% (16 out of 20 cases) and night-time symptoms improved in all 20 cases (100%); thus the use of beta2 agonist also decreased in all groups.

Table 2

Intermittent asthma (five cases) after treatment of rhino-sinusitis

No. of patients (5) | Symptom frequency | Night-time symptoms | Use of short-acting beta2 agonist for symptom control
1 | <Once/week | <Once/month | <2 days/week
2 | <Once/week | <Once/month | <2 days/week
3 | Once/week | Once/month | <2 days/week
4 | Once/week | Once/month | <2 days/week
5 | Twice/week | Once/month | ≤2 days/week

The table shows that in only one case the symptom frequency did not improve.

Table 3

Mild persistent asthma (seven cases) after treatment of rhino-sinusitis

No. of patients (7) | Symptom frequency | Night-time symptoms | Use of short-acting beta2 agonist for symptom control
1 | Once/week | <Once/month | <2 days/week
2 | Once/week | <Once/month | <2 days/week
3 | Once/week | <Once/month | <2 days/week
4 | Twice/week | Once/month | ≤2 days/week
5 | Twice/week | Once/month | ≤2 days/week
6 | >Twice/week | <Twice/month | >2 days/week but not daily
7 | >Twice/week | <Twice/month | >2 days/week but not daily

The table shows that in two cases the symptom frequency did not improve.

Table 4

Moderate persistent asthma (six cases) after treatment of rhino-sinusitis

No. of patients (6) | Symptom frequency | Night-time symptoms | Use of short-acting beta2 agonist for symptom control
1 | Once/week | <Twice/month | <2 days/week
2 | Once/week | <Twice/month | <2 days/week
3 | Once/week | 3–4/month | <2 days/week
4 | >Twice/week | 3–4/month | >2 days/week but not daily
5 | >Twice/week | 3–4/month | >2 days/week but not daily
6 | Daily | 3–4/month | Daily

The above table shows that in only one case the symptom frequency did not improve.

Table 5

Severe persistent asthma (two cases) after medical and surgical treatment of rhino-sinusitis

No. of patients (2) | Symptom frequency | Night-time symptoms | Use of short-acting beta2 agonist for symptom control
1 | >Twice/week | 3–4/month | >2 days/week but not daily
2 | >Twice/week | 3–4/month | >2 days/week but not daily

Correlation Between Asthma and CRS

Our experience with 20 cases (15–60 years age group) is that when rhino-sinusitis is appropriately treated (medically in 18 cases, or surgically with FESS in 2 cases) there was significant improvement in the frequency and severity of asthma symptoms (e.g., (1) shortness of breath, (2) wheezing, (3) coughing in cold and at night, and (4) chest tightness), and there was a decreased need for asthma medication in all cases once rhino-sinusitis had been controlled.

In another recent study, 100% of subjects with severe asthma (requiring steroid treatment) had abnormal sinus computed tomography scans versus 77% of subjects with mild to moderate asthma. Initial associations among these diseases were noted due to the concurrence of these disease processes and were later more objectively established by way of epidemiologic studies. The simple coexistence of rhinitis and asthma, as an example, was demonstrated by Corren [2], who noted that nasal symptoms were suffered by approximately 78% of a large group of patients with asthma. In another classic paper, the Finnish twin cohort study [3], more than 11,000 patients were followed longitudinally to assess whether the presence of allergic rhinitis was associated with the development of other respiratory diseases over time. Questionnaires were administered in 1975, 1981, and 1990 and revealed a fourfold increase in asthma reporting at the end of the study in subjects with hay fever over normal control subjects. In 2002, Guerra et al. [4] corroborated these findings after following 1,655 patients with allergic rhinitis and 2,177 normal controls over a 20-year period. As in the previous study, sufferers of allergic rhinitis were approximately three times more likely to develop asthma than were the controls.

Similar relationships have been identified among other allergic and non-allergic respiratory diseases. Anecdotal associations between asthma and rhino-sinusitis have been reported for more than 70 years [5]. The prevalence of asthma, for instance, in patients with chronic rhino-sinusitis (approximately 20%) is consistently noted to be much greater than that observed in the general population (5–8%) [6], and in those patients who undergo endoscopic sinus surgery the prevalence climbs even higher, to approximately 42% [7]. The same association exists between chronic rhino-sinusitis and allergic rhinitis, as shown by a cohort of patients with the diagnosis of recurrent acute or chronic rhino-sinusitis who were followed within a major health care system. In this study, patients diagnosed with chronic rhino-sinusitis demonstrated a 57% prevalence of positive in vitro or allergy skin testing [8]. The overlap and interrelationship of these respiratory diseases become less surprising as definitions of disease and underlying pathophysiologies come into better focus. The concept of asthma as a chronic inflammatory disease emerged in 1991 as a result of a report by the National Institutes of Health and the National Heart, Lung, and Blood Institute [9]. With this report, the pathophysiological focus of asthma shifted from bronchospasm to one of inflammation that is mediated at the cellular level. The implications of this report were monumental and resulted in a major shift in treatment strategy for the disease. It was not until later that similar observations resulted in refinements in the definition of chronic rhino-sinusitis. In 2003, a definition of chronic rhino-sinusitis was introduced that emphasized the pathogenic role of inflammation in the disease [10].

Treatment Approaches can be Divided into

Environmental controls of allergy.

Environmental controls of allergy remain a cornerstone in the management of patients who have allergic rhinitis. Reduction of antigen quantity is, however, only an indirect measure of whether an environmental control strategy actually reduces allergic symptoms. Strategies for reduction of indoor inhalant allergens (dust mite, cockroach, molds), as well as techniques for reducing exposure to outdoor inhalant allergens, should be encouraged.

Treatment with Drugs

Antihistaminics

They control rhinorrhoea, sneezing, and pruritus.

Sympathomimetic Drugs (oral or topical)

Alpha-adrenergic drugs constrict blood vessels and reduce nasal congestion and edema.

Corticosteroids

Systemic corticosteroids are limited to acute episodes that have not been controlled by other measures, because they have several systemic side effects. Topical steroids are used as aerosols and are very effective in controlling symptoms.

Sodium Chromoglycate

It stabilizes the mast cells and prevents their degranulation despite the formation of the antigen–IgE complex.

Immunotherapy

Immunotherapy or hyposensitisation is used when drug treatment fails to control the symptoms or produces intolerable side effects. Allergen is given in increasing doses till the maintenance dose is reached. Immunotherapy reduces the formation of IgE. It also raises the titer of specific IgG (blocking) antibody. Immunotherapy has to be given for a year or so before a significant improvement in symptoms can be noticed.

Functional endoscopic sinus surgery done for CRS management has been demonstrated to result in a decreased need for asthma medications, improved pulmonary function, and fewer asthma exacerbations.

Treatment Effects in the Unified Airway

Several papers have shown that treating allergic rhinitis with intranasal corticosteroid sprays can improve both asthma symptoms and objective indices of pulmonary function [13, 14]. Both oral antihistamines and oral leukotriene receptor antagonists have shown similar effects. In addition, these treatment effects can be translated into direct social impact, in that patients with concurrent allergic rhinitis and asthma who receive treatment for their nasal disease demonstrate a decreased incidence of hospitalization and emergency department visits for asthma when compared with patients not receiving or not adherent to rhinitis treatment.

Successful management of chronic sinus disease has been demonstrated to result in decreased need for asthma medications, improved pulmonary functions, and fewer asthma exacerbations.

Evidence of concurrent benefit to both the upper and lower airways with immunotherapy further strengthens the observation that system-wide airway effects can be noted with proper therapeutic interventions.

The impact of these various treatment studies supports the hypothesis that the treatment of airway inflammation can have benefit in the management of the unified airway as a systemic whole.

  • Otorhinolaryngologists frequently treat individuals who have upper airway diseases such as allergic rhinitis and acute and chronic rhino-sinusitis, and are therefore uniquely positioned to identify, monitor, and manage patients currently symptomatic from, or at risk of developing, concurrent asthma.
  • Thus an understanding of the concept of the unified airway and ability to translate its principles into successful diagnostic and treatment strategies can enhance the practice of otolaryngology.
  • Management with this concept can lead to improved patient outcomes and quality of life.


  12. Unity of science

    Unity of science. The unity of science is a thesis in philosophy of science that says that all the sciences form a unified whole. The variants of the thesis can be classified as ontological (giving a unified account of the structure of reality) and/or as epistemic /pragmatic (giving a unified account of how the activities and products of ...

  13. Climate

    Climate - Gaia Hypothesis, Earth System, Biosphere: The notion that the biosphere exerts important controls on the atmosphere and other parts of the Earth system has increasingly gained acceptance among earth and ecosystem scientists. While this concept has its origins in the work of American oceanographer Alfred C. Redfield in the mid-1950s, it was English scientist and inventor James ...

  14. Scientific method

    The scientific method is an empirical method for acquiring knowledge that has characterized the development of science since at least the 17th century. The scientific method involves careful observation coupled with rigorous scepticism, because cognitive assumptions can distort the interpretation of the observation.Scientific inquiry includes creating a hypothesis through inductive reasoning ...

  15. Full article: Concepts as a working hypothesis

    the hypothesis that concepts are prototype ... if there is a single, unified notion of a concept, etc., are to be answered by comparing different kinds of (appeals to) cognitive science according to some set of criteria; e.g ... I define cognitive science as the interdisciplinary study of mind embracing philosophy, psychology, artificial ...

  16. Unified Theories

    UNIFIED THEORIESThe quest for unification has been a perennial theme of modern physics, although it dates back many millennia. The belief that all physical phenomena can be reduced to simple elements and explained by a small number of natural laws is the central tenet of physics, indeed of all science. One of the first unifying scientific principles was the atomic hypothesis, beautifully ...

  17. Scientific method

    scientific method, mathematical and experimental technique employed in the sciences. More specifically, it is the technique used in the construction and testing of a scientific hypothesis. The process of observing, asking questions, and seeking answers through tests and experiments is not unique to any one field of science.

  18. Hypothesis Definition & Meaning

    hypothesis: [noun] an assumption or concession made for the sake of argument. an interpretation of a practical situation or condition taken as the ground for action.

  19. Competitive intelligence: A unified view and modular definition

    A unified CI definition will undoubtedly provide a much-needed baseline reference for organizations and practitioners. ... The ACH focused on testing the hypothesis for the core defining dimensions. The ACH is an eight-step procedure based on fundamental insights from cognitive psychology, decision analysis, and the scientific method. ...

  20. Unified field theory

    Unified field theory. In physics, a unified field theory ( UFT) is a type of field theory that allows all that is usually thought of as fundamental forces and elementary particles to be written in terms of a pair of physical and virtual fields. According to modern discoveries in physics, forces are not transmitted directly between interacting ...

  21. Chronic Rhino-Sinusitis and Asthma: Concept of Unified Airway Disease

    The unified airway disease (UAD) hypothesis purposes that upper and lower airway diseases are both are infections of a single inflammatory process within the respiratory tract. Synonyms of UAD include allergic rhino bronchitis and combined allergic rhino-sinusitis and asthma. ... In 2003, a definition of chronic rhino-sinusitis was introduced ...

  22. HYPOTHESIS Definition & Meaning

    Hypothesis definition: a proposition, or set of propositions, set forth as an explanation for the occurrence of some specified group of phenomena, either asserted merely as a provisional conjecture to guide investigation (working hypothesis ) or accepted as highly probable in the light of established facts.. See examples of HYPOTHESIS used in a sentence.

  23. Unified theory of acceptance and use of technology

    The unified theory of acceptance and use of technology (UTAUT) is a technology acceptance model formulated by Venkatesh and others in "User acceptance of information technology: Toward a unified view". The UTAUT aims to explain user intentions to use an information system and subsequent usage behavior. The theory holds that there are four key constructs: 1) performance expectancy, 2) effort ...