Philosophy of Science Series: Scientific Models
Foreword
The Philosophy of Science series explores both general questions about the nature of science and specific foundational issues related to the individual sciences. When applied to such subject areas, philosophy is particularly good at illuminating our general understanding of the sciences. This series will investigate what kinds of serious—often unanswered—questions a philosophical approach to science exposes through its heuristic lens. This series, more specifically, will look at the ‘Scientific Realism’ debate throughout, which questions the very content of our best scientific theories and models.
Philosophy of Science Series will be divided into the following chapters of content:
6. Philosophy of Science Series: Scientific Models
Philosophy of Science Series: Scientific Models
As discussed in the previous—most recent—article of this series, the study of causation (i.e., what may account for causation and the relation between a cause c and an effect e) is central to the philosophy of science. There are many different types of accounts and analyses of causation which also further explore various causal models. Many scientific models are in fact representational models here since they (supposedly) can represent a selected part or aspect of the world such as causation, known as the model’s target system (Frigg & Hartmann, 2006). How might a scientific model represent a target system, though? How does a scientific model make a particular part or feature of the world easier to understand, visualise, quantify, define, or simulate? Further, how can a scientific model reference existing and commonly accepted "knowledge" of the given target system? (von Neumann, Taub & Taub, 1963). This article will delve into these very questions, thus exploring another significant and influential part of the philosophy of science enterprise. To this end, this part of the series will (a) introduce the philosophical questions surrounding scientific models and their aims, (b) consider different types of models, and finally (c) look at scientific representation in modelling. This is with a view to investigating models of explanation in the following article of the series.
The Basic Idea: Models in Science Matter
Modelling is a crucial part of scientific practice: it requires identifying a relevant target system (from the real world) and then developing a model which replicates the selected features of that system. Consider a few examples of models which highlight why and how this endeavour is an inseparable part of science:
Bohr’s (1913) model of the atom
The Lorenz (1967) system model of the atmosphere
Billiard Ball model of a gas (Egger & Carpi, 2008).
This list may be extended ad nauseam, providing cases in point of the central importance of models in many scientific contexts. This importance has notably been increasingly recognised by philosophers too, with philosophical literature on models growing rapidly over the last decades in line with the number of different types of models (Frigg & Hartmann, 2006). As mentioned, such models often represent their target system (i.e., a particular part or aspect of the world). The atom in Bohr’s (1913) model is the target system, for example, just as the atmosphere is the target system of the Lorenz (1967) model. The same goes for target systems on which scientists cannot experiment: the target may be too far away (e.g., stars), too large to intervene on (e.g., the solar system), ethically irresponsible to intervene on (e.g., heart function), or impossible to intervene on given the nature of the system (e.g., the stock market). Since target systems are often difficult or impossible to experiment on, the natural response is to build a model of the system and study it instead. Scientists may study the model in order to learn about the model’s target (whether the target is easily "intervenable" or not). One can learn about gravity arising from the Sun by studying the Newtonian model of the solar system, for example, or about unpredictable weather via the Lorenz system model. Likewise, one may learn about predator-prey interaction from the Lotka-Volterra model (Zhu & Yin, 2009). Since models are representations of their target systems, one can learn from them, and this gives rise to many philosophical questions about scientific models as such.
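The idea of learning about a target by studying a model can be made concrete with the Lotka-Volterra model just mentioned. Below is a minimal Python sketch that integrates the predator-prey equations with crude Euler steps; the parameter values, initial populations, and step size are all made up for illustration, not taken from any source.

```python
# Sketch: studying the Lotka-Volterra predator-prey model by simulation.
# dx/dt = a*x - b*x*y  (prey),  dy/dt = -c*y + d*x*y  (predators).
# Parameters, initial populations, and step size are illustrative only.

a, b, c, d = 1.0, 0.1, 1.5, 0.075
x, y = 10.0, 5.0            # initial prey and predator populations
dt, steps = 0.01, 2000      # crude Euler integration over 20 time units

history = []
for _ in range(steps):
    dx = (a * x - b * x * y) * dt
    dy = (-c * y + d * x * y) * dt
    x, y = x + dx, y + dy
    history.append((x, y))

# Studying the model: prey peaks are followed by predator peaks.
print(max(p for p, _ in history) > 10.0)  # True: prey rises above its start
```

Even so crude a simulation lets one read qualitative lessons about the target (cyclic population dynamics) off the model, which is precisely the epistemic role of models described above.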
Major philosophical questions in this area concern a model’s ability to represent something and yield knowledge as a result. Many model assumptions are false, and it must be noted that some of them are dramatically false; a model built on false assumptions is not a description of the facts, which seems to limit the lessons one can draw from it. Yet models are supposed to tell us something about the world. Serious philosophical questions must therefore be asked about what exactly a model represents and how it does so (as this article will discuss later). Ontological questions also arise, since it is important to understand what a model is. For example, what is the famous Fibonacci model? What does it consist of? Is it the equation? Is it the sequence? Or is it the model assumptions? (Frigg & Hartmann, 2006). Relatedly, what is it in a model that provides truth? The philosopher of science must ask what kind of internal structure can generate results in a model, especially when some claims in the same model are true and others false. Epistemological questions are asked here too, for the philosopher must work out how one learns from a model and what is true in a model. Models are notably quite similar to theories in science, so the similarities and differences between the two must also be spelled out. To use Fibonacci as an example again, the model is independent of the theory (Frigg, 2002). Not all models are like this, however; models are in fact often related to theories in several ways (Bailer-Jones, 2002). On top of ontological, epistemological, and semantic questions concerning scientific models, other topics in and around the philosophy of science crop up too. Questions concerning the explanatory power of models arise, for instance, as do questions on the use of models in the scientific realism debate (discussed earlier in articles 2, 3, and 4 of the series).
The types of philosophical questions asked, and how they are answered, of course depend on the particular model in question. There are three "main" types of models, as this article will explore before investigating the problem of representation specifically.
Three Fundamental Kinds of Models
Scientific models, roughly, can be understood as representations. There are three fundamental types of such models concerning what is being represented (Treagust, Chittleborough, & Mamiala, 2002):
Models of phenomena
Models of theory
Models of data.
Models of phenomena have already been discussed, albeit briefly and without mentioning the "model of phenomena" title. This type of model is a model of a selected part or aspect of the world—a phenomenon (also known as a "target system"). Bohr’s (1913) model of the atom is therefore a good example, with the atom being the particular aspect (or phenomenon) of the world that the model is representing. By the same token, the atmosphere is the target system in the Lorenz model. Models of theory then differ, bringing logic into the picture. Whereas a theory is a set of sentences in a formal language, a model (of theory) is a structure that makes all the sentences of the theory true (Frigg & Hartmann, 2006). Consider a simple example:
Theory T: ∀x (Fx → Gx)
Model:
- The set S = {S1, … S100} consisting of all objects in a room.
- Let F be the predicate “is a wall”.
- Let G be the predicate “is painted white”.
Sentence ∀x (Fx → Gx) is true in S. Therefore, S is a model of the theory T.
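This logical notion of a model can be sketched computationally: a finite structure satisfies T just in case every object falling under F also falls under G. The object names below are illustrative stand-ins for the set S = {S1, … S100} above.

```python
# A finite structure: a domain of objects plus extensions for predicates F, G.
# The structure satisfies T = { ∀x (Fx → Gx) } iff every F-object is a G-object.

def satisfies_theory(domain, F, G):
    """True iff the sentence ∀x (Fx → Gx) holds in the structure."""
    return all(x in G for x in domain if x in F)

# Illustrative structure: objects in a room, where every wall is painted white.
domain = ["wall_north", "wall_south", "door", "window"]
F = {"wall_north", "wall_south"}           # extension of "is a wall"
G = {"wall_north", "wall_south", "door"}   # extension of "is painted white"

print(satisfies_theory(domain, F, G))  # True: this structure is a model of T
```

Removing "door" from G would not change the verdict, but removing "wall_north" would: the structure would then fail to satisfy the theory.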
Euclidean geometry provides a more serious example: the structures satisfying Euclid’s axioms are models of that theory. Here, a structure is a ‘model’ in the sense that the model is what the theory is about. Sometimes it is said that logical models are an interpretation of the theory, or that they satisfy the theory (Frigg, 2002). Yet such a model S is not itself about anything; it is just a set of objects. Hence, models in the logical sense are not ipso facto models of phenomena (Putnam, 1969). One could argue that models are multi-functional, however, since many models in science are models of both phenomena and theory in various respects. Consider, for instance, the Newtonian model of the Sun-Earth system (Pal, Abouelmagd, & Kishor, 2021): the model satisfies Newton’s theory of motion (making it a model of theory) whilst also representing a target system, the Sun-Earth system (making it a model of phenomena). This is unfortunately untenable as a general account, since (a) it is not universal and (b) the relations between models (i.e., the kinds of facts that are true or false of two models together) are not usually straightforward to identify, but multi-functional models are nevertheless interesting to consider (and notably deserve attention elsewhere).
The final type refers to models of data. Indeed, empirical observations sometimes provide evidence in the form of data points. Raw data are, of course, first corrected and rectified. Models of data are thus formed from data points and their patterns. A hypothetical example might be the kind of model formed from data on Venetian sea levels (Sober, 2001). A "pattern" identified in data on flooding might help predict when the next flood will occur, for example. The model is formulated via linear regression: the curve fitted through the data points is an inclined straight line. There is no ‘physical modelling’ involved here as such, but a model of data is the result (helping to predict future flooding) (Sober, 2001). The number of different types of models recognised has increased as the study of scientific models has grown in the philosophy of science. Questions and issues around model representation are therefore of particular interest in the philosophy of science.
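A model of data of the kind just described can be sketched in a few lines. The measurements below are entirely made up (the real data are discussed in Sober, 2001); the point is only to show how a least-squares line through data points becomes a predictive model of data.

```python
# Sketch of a model of data: fit an inclined straight line through
# (hypothetical) annual mean sea-level measurements by least squares.

years = [2000, 2005, 2010, 2015, 2020]
levels_cm = [23.0, 24.1, 25.3, 26.2, 27.4]   # made-up sea levels, in cm

n = len(years)
mean_x = sum(years) / n
mean_y = sum(levels_cm) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, levels_cm))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

def predict(year):
    """The data model: the fitted line, used to extrapolate future levels."""
    return intercept + slope * year

print(predict(2025))  # extrapolated 2025 level (≈ 28.5 with these numbers)
```

The fitted line is the model: no physical mechanism is represented, yet the model licenses predictions about future observations.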
Scientific Representation: The Problem
The problem in question is as follows: in virtue of what is a model a representation of something else? More formally, the problem is what fills the blank in:
M is a scientific representation of T iff (if and only if) ____________ where "M" stands for "model" and "T" for "target system" (Frigg & Nguyen, 2017).
There are some conditions of adequacy to consider briefly first, such as learning from models. Representation must be such that it allows one to derive claims about the target system from the model; one learns about future flooding from a model of data on Venetian sea levels, for instance, as mentioned (Sober, 2001). Moreover, an account of representation must allow for misrepresentation. Directionality is important too: a model represents its target and not (usually) vice versa, and this essential directionality has to be explained (Frigg & Hartmann, 2006).
On top of maintaining various conditions of adequacy, now also consider what representation is not. Representation, crucially, is not a mirror image of a target system. That proposed definition is both wrong and misleading, for mirror images are alike whereas representations need not be. There are instead many different representations of the same object, which may warrant different inferences. Failure to take this into account can lead to serious mistakes, since there are so many different kinds of representations in the sciences. Models of strangelets (i.e., hypothetical objects composed of an exotic form of matter known as strange matter or quark matter) (Anissimov, 2023), for example, include the Liquid Drop Model and the Shell Model (Madsen, 1994). The two are vastly different from one another and showcase why models should not be understood as mirror images. Representation does not imply "mirror image", just as science is not a copy of the world. This, then, is what representation is not. But what is it? Various accounts of representation (by scientific models) are discussed next.
Scientific Representation: Similarity and Isomorphism Accounts
To reiterate, representations are not mirror images. Some accounts, however, hold that similarity and representation initially appear to be two closely related concepts (Frigg & Nguyen, 2017). Interestingly, this idea of similarity to ground representation even has a philosophical lineage stretching back—at least—as far as Plato’s The Republic (Allen, 2006). There are numerous versions of the similarity account of representation such as:
1. A model M is a scientific representation of a target system T iff M and T are similar.
2. M is a scientific representation of T iff a model user provides a theoretical hypothesis H specifying that M and T are similar to one another in relevant respects and to relevant degrees (Giere, 2004).
Clearly, account (2) develops (1). Overall, similarity accounts work by exploiting similarities between a model and the aspect of the world it is being used to represent (Giere, 2004). The worry with account (1), of course, is that mere similarity is not enough to ground representation. Indeed, everything is similar to everything else, because any two items share some property. Assume now that these problems could somehow be solved, for example by narrowing down the "allowable" kinds of similarity. Then recall the conditions of adequacy: learning from models, misrepresentation, and directionality. The learning condition is met here because if one understands that M is similar to T, and M has a certain property P, then one can infer that T has a similar property. Account (1), however, does not meet the misrepresentation condition: if something misrepresents, then it fails to be similar, yet something that is not similar is not a representation at all according to (1). The third and final condition of adequacy is also unmet, since similarity is symmetrical: if A is similar to B, then B is similar to A. Representation is not symmetrical, though: if M represents T, then T does not (usually) represent M. This is why (1) fails to explain the directionality of representation, and why many argue for a more developed account like (2) with the inclusion of an intentional agent.
Ronald Giere’s (2004) similarity account of representation (account 2) rethinks a similarity account like that of (1) since a model user provides H specifying that M and T are similar to one another in relevant respects and to relevant degrees. Giere’s account is notably prominent amongst similarity accounts of representation. More recently, Giere (2010) has sought to defend the similarity account by explicitly invoking the role played by scientists—model users—using a scientific model (Toon, 2012). Appealing to agents and their representational capacities offers a promising way to defend the similarity account. Giere (2004), interestingly, proposes a shift away from a traditional focus on representation to the activity of representing. As Adam Toon (2012) puts it:
S uses X to represent W for purposes P, where S may be an individual scientist, a scientific group, or a larger scientific community, W is an aspect of the real world, and X is a representational device. While X might be a diagram, graph, or some other form of representational device, it is models that are primary (though by no means the only) representational tools in the sciences. (p. 246)
Giere’s proposal is therefore that models do not represent "on their own" so to speak, but only because of what scientists do with them. Likewise, in assessing individual cases, one must ask not "is this object a model-representation?" but "is this object-used-in-this-particular-way a model representation?" (Toon, 2012). Giere, overall, thus offers an account which stresses the way in which scientists exploit similarities between models and the world. This use of an intentional agent (the scientist) is therefore a development of account (1) and overcomes problems concerning mere similarity. Consider how Giere (2004) introduces his account:
…I am not saying that the model itself represents an aspect of the world because it is similar to that aspect. There is no such representational relationship. Anything is similar to anything else in countless respects, but not anything represents anything else. It is not the model that is doing the representing; it is the scientist using the model who is doing the representing. (pp. 747-8)
Mere similarity is not the problem here that it was for account (1), since scientists pick out specific features of the model that they can claim are similar to features of the designated target system to some degree of fit (Giere, 2004). Indeed, Giere calls the statements specifying the similarities between model and system theoretical hypotheses (Toon, 2012). For the Newtonian model of the earth-moon system, for instance, the theoretical hypothesis is that the positions and velocities of the earth and moon in the earth-moon system are very close to those of a two-particle Newtonian model with an inverse square central force (Giere, 2004). Hence a model does not simply represent T because it is similar to T, as account (1) suggests; scientists instead use the model to represent the system by exploiting similarities via theoretical hypotheses.
It is beyond the scope of this article to investigate whether an account like Giere’s entirely overcomes the problems that other (oftentimes simpler) similarity accounts of representation encounter, though account (2) is less naïve than (1). Moreover, it is not necessarily true that all forms of scientific representation involve similarity between M and T. One might even adopt a different kind of account because of this, namely a structuralist account of representation. Though related to the similarity accounts above, the structuralist account construes "similarity" somewhat differently. One might decide to view the structuralist account either as (a) a special version of the similarity account (similarity with respect to structure), or (b) an independent account. First consider what "structure" refers to (where S is the structure, D is the domain of objects, and R stands for the relations on that domain):
S = < D, R >
Notably, different objects can have the same structure. Very loosely, then, two objects are isomorphic if they share the same structure. This idea may be used to define an account of representation: M is a scientific representation of T iff M and T are isomorphic (Frigg, 2002). Indeed, the objects that serve as models belong to different ontological kinds and are often set-theoretic structures (Frigg & Hartmann, 2006), adding somewhat to the desirability and plausibility of the account. Regardless of ontology, however, the "isomorphism" account holds that it is a shared structure (between M and T) that accounts for representation. This is why an isomorphism account is rather like a similarity account of representation. Perhaps, though, similarity in general is irrelevant when a model represents a target system?
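The isomorphism idea can be made concrete for tiny finite structures. The sketch below treats a structure S = < D, R > as a domain plus one binary relation and brute-forces over all bijections between the domains; the example structures are invented for illustration.

```python
from itertools import permutations

# Two structures <D1, R1> and <D2, R2> (one binary relation each) are
# isomorphic iff some bijection f: D1 -> D2 maps R1 exactly onto R2.

def isomorphic(d1, r1, d2, r2):
    if len(d1) != len(d2):
        return False
    for image in permutations(d2):
        f = dict(zip(d1, image))                      # candidate bijection
        if {(f[a], f[b]) for a, b in r1} == set(r2):  # structure preserved?
            return True
    return False

# A three-element "chain" under two different labellings: same structure.
print(isomorphic([1, 2, 3], {(1, 2), (2, 3)},
                 ["a", "b", "c"], {("a", "b"), ("b", "c")}))  # True
```

The two domains contain quite different objects (numbers versus strings), yet the structures are isomorphic: exactly the point that objects of different ontological kinds can share a structure.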
Scientific Representation: Representation-as by Goodman and Elgin
This article now finishes by discussing a final, and quite promising, account of how to think about the representational relationship between models and the world. Representation-as is an account that emerges from the work of Nelson Goodman and Catherine Z. Elgin (Nguyen & Frigg, 2017). On Goodman and Elgin’s account, one can think of representation much like how Margaret Thatcher is represented as a sand timer in her caricature on The Economist cover (in figure 7 below). Scientific models represent, very roughly, in the same way. Figure 8 below, the Kendrew model of myoglobin, is another example of this kind of representation, whereby myoglobin is represented as a plasticine-type structure on sticks. Consider the notation below before introducing Elgin’s (2009) definition of representation:
X – the object that does the representing (for instance, the caricature drawing of Margaret Thatcher)
Y – the real-world target of the representation (Margaret Thatcher herself in this instance)
Z – the kind of a representation (a sand timer in this instance).
Elgin’s (2009) definition is then as follows: when X represents Y as Z, it is because X is a Z representation that denotes Y as it does. X does not merely denote Y and happen to be a Z representation. Rather in being a Z representation, X exemplifies certain properties and imputes those properties or related ones to Y.
To discuss further the representational relationship between models and their targets as one of representation-as requires adding specificity to the definitions above, namely denotation and exemplification. First, denotation (broadly speaking) is a two-place relation between a symbol and the object to which it applies (Nguyen & Frigg, 2017). "NASA", for example, stands for the National Aeronautics and Space Administration and denotes the U.S. federal agency responsible for the civil space program, aeronautics research, and space research (Bilstein, 1996). Pictures, equations, charts, and graphs (the list could go on) are indeed representations of the things they denote. On this note, Goodman (1976) claims that we are often misled by ordinary language into believing that something is a representation only if there is something in the world that it represents. Distinguish between (1) pictures of a unicorn and (2) unicorn pictures, for example; more generally, distinguish between (1) pictures of a Z and (2) Z-pictures. One does not imply the other, argues Goodman (1976). In sum, a picture of a Z denotes a Z, but without necessarily showing a Z; a Z-picture shows a Z, but without necessarily denoting a Z. Hence, some Z-representations denote a Z and others do not, just as some pictures of a Z are Z-pictures and others are not. A map of Europe, for instance, is a territory-representation that also denotes an actual territory, whereas a map of a fictional land is a territory-representation that denotes no actual territory.
Second, there is exemplification. An item exemplifies a property if it at once instantiates the property and refers to it (Nguyen & Frigg, 2017); in this sense, exemplification is possession plus reference. Instantiation (i.e., actually possessing the property) is thus a necessary condition for exemplification, since an item can exemplify a property only if it instantiates it (Elgin, 2009). Selectiveness is important here too, for the converse does not hold: not every property that is instantiated is also exemplified. Exemplification is therefore selective; an exemplar typically instantiates a host of properties but exemplifies only a few of them (Nguyen & Frigg, 2017). Moreover, selection is notably contextual. Which properties are exemplified, and which are merely instantiated, is not dictated by the object itself: nothing in the nature of things marks some features as inherently more worthy of selection than others (Elgin, 2009). Notice here that exemplification warrants epistemic access: from an exemplar one can learn about the properties it exemplifies, since exemplars instantiate those properties in a way that makes them salient. In the Kendrew model of myoglobin (see figure 8 below), for example, the myoglobin representation (i.e., the plasticine structure) exemplifies myoglobin properties such as the shape of a disc consisting of two layers of chain.
Before concluding, this article offers a few words from Bas van Fraassen (2010), summarising the representation-as viewpoint nicely:
Resemblance is certainly not the "be all and end all" of representation. Even when representation is not purely symbolic, distortion and unlikeness can play a crucial role in how the representing is achieved. When resemblance is in fact the vehicle of representation, the representation relation derives from selective resemblance and selective non-resemblance, and just what the relevant selections are must be highlighted in such a way as to convey their role. If the selection or the highlighting is indicated by signs placed in the artifact itself, these need to be meaningful in order to play their role, and so the task of identification is pushed back but reappears as essentially unchanged. (p. 11)
Conclusion
This article has introduced another significant part of the philosophy of science: scientific models. To this end, the article started by explaining the real importance of modelling in science. Modelling proves to be a crucial and central part of scientific practice, hence this article provided examples throughout to highlight just how influential and important models are, both historically and today: Bohr’s model of the atom, the Kendrew model of myoglobin, the Billiard Ball model of a gas, and the Lorenz model of the atmosphere, to name but a few. One can learn from such models, and with growing philosophical interest, the number of different types of models recognised grows too. This article explored three fundamental types: models of phenomena, of theory, and of data. Each type raises interesting philosophical questions and problems, especially concerning a model’s ability to represent something. Representation indeed proves to be a particularly perplexing problem in the philosophy of science, since it is not easy to explain in virtue of what a model represents a target system. This article therefore finished by investigating various accounts of scientific representation, namely similarity accounts, isomorphism accounts, and the "representation-as" account of Goodman and Elgin. Each account provides an interesting take on representation by scientific models, and one must evaluate which really meets or overcomes the conditions of adequacy. Philosophical queries around representation by scientific models are indeed just one area of the philosophy of science interested in modelling. This series shall therefore next consider another related area: models of explanation in science.
Bibliographical References
Allen, R.E. (trans.) (2006). Plato. The Republic. New Haven: Yale University Press.
Anissimov, M. (2023, January 28). What is a strangelet? All the Science. Retrieved February 20, 2023, from https://www.allthescience.org/what-is-a-strangelet.htm
Bailer-Jones, D. M., & Bailer-Jones, C. A. L. (2002). “Modeling Data: Analogies in Neural Networks, Simulated Annealing and Genetic Algorithms”, in Magnani and Nersessian 2002: 147–165. doi:10.1007/978-1-4615-0605-8_9
Bilstein, R. E. (1996). “From NACA to NASA”. NASA SP-4206, Stages to Saturn: A Technological History of the Apollo/Saturn Launch Vehicles. NASA. pp. 32–33.
Bohr, N. (1913). I. On the constitution of atoms and molecules. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 26(151), 1-25.
Egger, A. E., & Carpi, A. (2008). Research Methods: Modeling. Vision learning, 1(8).
Elgin, C. Z. (2009). “Exemplification, Idealization, and Scientific Understanding.” In Fictions in Science. Philosophical Essays on Modeling and Idealization ed. Maricio Suárez, 77-90. New York and London: Routledge.
Frigg, R. (2002). Models and representation: Why structures are not enough. Measurement.
Frigg, R., & Hartmann, S. (2006). Models in science.
Frigg, R., & Nguyen, J. (2017). Models and representation. Springer handbook of model-based science, 49-102.
Giere, R. (2004). How Models are Used to Represent Reality. Philosophy of Science 71, 742-52.
Giere, R. (2010). An agent-based conception of models and scientific representation. Synthese 172, 269-81.
Goodman, N. (1976). Languages of Art. 2nd ed., Indianapolis and Cambridge: Hacket.
Lorenz, E. (1967). The nature and theory of the general circulation of the atmosphere. World meteorological organization, 161.
Madsen, J. (1994). Shell model versus liquid drop model for strangelets. Physical Review D, 50(5), 3328.
Neumann, J. V., Taub, A. W., & Taub, A. H. (1963). The Collected Works of John von Neumann: 6-Volume Set. Pergamon Press.
Nguyen, J., & Frigg, R. (2017). Scientific Representation Is Representation-As.
Pal, A. K., Abouelmagd, E. I., & Kishor, R. (2021). Effect of Moon perturbation on the energy curves and equilibrium points in the Sun–Earth–Moon system. New Astronomy, 84, 101505.
Putnam, H. (1969). Is logic empirical?. In Boston Studies in the Philosophy of Science: Proceedings of the Boston Colloquium for the Philosophy of Science 1966/1968 (pp. 216-241). Springer Netherlands.
Sober, E. (2001). Venetian Sea Levels, British Bread Prices, and the Principle of the Common Cause. British Journal for the Philosophy of Science, 52(2).
Toon, A. (2012). Similarity and scientific representation. International Studies in the Philosophy of Science, 26(3), 241-257.
Treagust, D. F., Chittleborough, G., & Mamiala, T. L. (2002). Students’ understanding of the role of scientific models in learning science. International journal of science education, 24(4), 357-368.
Van Fraassen, B. C. (2010). Scientific representation: Paradoxes of perspective.
Zhu, C., & Yin, G. (2009). On competitive Lotka–Volterra model in random environments. Journal of Mathematical Analysis and Applications, 357(1), 154-170.