Multivariate Behavioral Research, July 1987, 22, 267-305
A Brief History of the Philosophical Foundations of
Exploratory Factor Analysis
Stanley A. Mulaik
Georgia Institute of Technology
Exploratory factor analysis derives its key ideas from many sources. From the Greek
rationalists and atomists comes the idea that appearance is to be explained by something
not observed. From Aristotle comes the idea of induction and seeking common features
of things as explanations of them. From Francis Bacon comes the idea of an automatic
algorithm for inductively discovering common causes. From Descartes come the ideas of
analysis and synthesis that underlie the emphasis on analysis of variables into
orthogonal or linearly independent factors and focus on reproducing (synthesizing) the
correlation matrix from the factors. From empiricist statisticians like Pearson and Yule
comes the idea of exploratory, descriptive statistics. Also from the empiricist heritage
comes the false expectation some have that factor analysis yields unique and unambiguous knowledge without prior assumptions: the inductivist fallacy. This expectation
founders on the indeterminacy of factors, even after their loadings are defined by
rotation. Indeterminacy is unavoidable in the interpretation of common factors because
the process of interpretation is inductive and inductive inferences are not uniquely
determined by the data on which they are based. But from Kant we learn not to discard
inductive inferences but to treat them as hypotheses that must be tested against
additional data to establish their objectivity. And so the conclusions of exploratory factor
analyses are never complete without a subsequent confirmatory analysis with additional
variables and new data.
Factor analysis is a mathematical model of relations between
variables. The model distinguishes between manifest (or measured)
variables and latent (or unmeasured) variables. Each manifest variable is regarded as a linear function of a common set of latent variables
(known as the common factors) and a latent variable unique to the
manifest variable (known as a unique factor). Common factors are
presumed to be uncorrelated with unique factors. Unique factors are
mutually uncorrelated. From these assumptions one can show that
correlations between pairs of variables are due only to the common
factors they have in common. Users of exploratory factor analysis seek,
by an inductive procedure, to discover and to identify the latent
common factor variables, given initially only the correlations among
the manifest variables. By inspecting the results of a factor analysis,
which show correlations of the manifest variables with the latent
common factor variables or which show coefficients indicating the
degree to which unit changes in the latent common factor variables
lead to changes in the manifest variables that depend on them, the users believe they will find clues as to what the hidden latent variables are. Thus what is manifest in appearance is to be explained by that which does not appear and is known only indirectly.

Correspondence in connection with this article should be sent to Stanley A. Mulaik, Georgia Institute of Technology, School of Psychology, Atlanta, GA 30332.
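The common factor model just described can be illustrated with a small numerical sketch. The loading matrix below is hypothetical (four manifest variables, two orthogonal common factors, standardized variables); it is offered only to show how the model's assumptions make off-diagonal correlations depend solely on shared common factors.

```python
import numpy as np

# Hypothetical loading matrix: rows are manifest variables, columns are
# orthogonal common factors; entries are correlations of each variable
# with each factor (standardized variables assumed).
Lambda = np.array([
    [0.8, 0.0],
    [0.7, 0.0],
    [0.0, 0.6],
    [0.0, 0.5],
])

# Unique-factor variances chosen so each manifest variable has unit variance.
psi = 1.0 - np.sum(Lambda ** 2, axis=1)

# The model implies R = Lambda Lambda' + diag(psi): every off-diagonal
# correlation arises only from the common factors the two variables share.
R = Lambda @ Lambda.T + np.diag(psi)

print(np.round(R, 2))
```

Variables 1 and 2 correlate 0.56 because they share factor 1; variables 1 and 3 correlate exactly zero because they share no common factor, which is the sense in which correlation is "due only to the common factors they have in common."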
Why has the method of factor analysis developed? Why do its users
believe the explanation for the correlations among measured variables
lies in some latent as yet unmeasured variables? Why do they believe
that a method of analysis will yield these unmeasured variables? Why
do they believe a mathematical model is appropriate? Obviously, these
are questions about the fundamental, philosophical assumptions that
the users of factor analysis make when they use that method.
To find answers for some of these questions, we shall first look
back to ancient Greece to see the origins for some of these assumptions,
and then work our way forward in history to the 16th and 17th
centuries to see how systematic methods for discovering knowledge
were a preoccupation of natural philosophers (scientists) like Bacon
and Descartes. Next we will see how the 18th century empiricists
elaborated on the method of analysis and synthesis developed by
Descartes, and how in the 19th Century empiricism gave rise to the
idea of exploratory, descriptive statistics, of which factor analysis is an
early 20th Century development. We will also see how Kant at the end
of the 18th Century raised questions about empiricism's lack of a
criterion for distinguishing subjective from objective knowledge in its
emphasis on merely describing associations among phenomenal impressions in experience. Kant believed that one could not obtain
objective knowledge without making a priori assumptions and, in the
context of these, formulating and testing conceptions about objects
against experience. But Kant's influence on the development of exploratory statistics in the 19th and early 20th Centuries was minimal, and
our concerns for the objectivity of these methods that are inspired by
Kant's focus on objectivity are still valid today. We will finally move to
the present to see how a fundamental problem of common factor
analysis, factor indeterminacy, reflects a fundamental problem of
inductive methods in general.
In the account that follows, my main interest will be upon
stressing the history of ideas as these relate to factor analysis. The
reader should be warned, however, that at times the account will seem
to move far afield from factor analysis. But such digressions will occur
only to provide the reader with a fuller appreciation of the historical
backdrop within which these key ideas originate and develop. It is my
hope that the student of factor analysis will then gain from my account
an appreciation for the ancient and often historically contingent
origins of many of the key ideas of factor analysis and then be in a
better position to go on to consider a contemporary analysis of the
appropriateness of these ideas.
Contributions from Greek Philosophy. The idea that what is
manifest in appearance is to be explained in terms of that which is
hidden from view is an ancient one. The earliest philosophers of
science, the Milesians of Ionian Greece in the 6th Century B.C., sought
to explain the world by showing how its many forms in appearance are
but variations in degree, concentration, or configuration of some single
underlying substance (hence, monism) such as (or similar to) water or
air. Following the Milesians, in the 5th Century B.C. Parmenides took
the monistic idea to its logical extreme by a rational argument that
attempted to prove that the premises of the monism of his time led to
the conclusion that the world of appearance is but an illusion, that the
true underlying reality is a single substance which is eternal,
uncreated, unchanging, and indivisible, yet spherical in form (Fuller &
McMurrin, 1955).
The absurdity of Parmenides' conclusions drove others to adopt
pluralist positions, while retaining the distinction between reality and
appearance. The followers of Pythagoras held that the underlying
reality was number (giving rise to mathematical explanations of the
world). Atomists such as Democritus in the 5th Century B.C. held that
the world is made up of a void occupied by innumerable tiny, invisible,
eternal, uncreated, unchanging, and indivisible entities known as
atoms. Change is brought about by the motions of the atoms that lead
them to form different configurations. Causality is the result of atoms coming into contact with other atoms and either imparting motion to them or hooking up with them or driving them apart. Thus there can be no causal action at a distance. Furthermore, the world of appearance, of colors, of hot and cold, of sounds and smells, is but illusion, for atoms have no color, nor heat nor cold, nor sounds nor smell. Appearance is but the activity of the atoms of our senses that have been set into motion by atoms, flowing from the surfaces of things, colliding with them. In this respect, the atomists echoed the theme of rationalists like Parmenides that what is observed is not real, that what is real is not observed, that to know reality accurately one must rely heavily on reason. Thus from the Greek rationalists and atomists we have the idea that what is real is different from what appears to us (Fuller & McMurrin, 1955). Needless to say, this idea is the origin of the
distinction made in factor analysis between observed and latent
variables. Atomism, we shall see, has had an extensive influence on
subsequent scientific thought.
Following the atomists, Plato in the 4th Century B.C. attempted
to reconcile Pythagorean and atomistic ideas by regarding the ultimate reality of the world as consisting of various kinds of atoms (or
Forms), classifiable according to their geometric form. The explanation
of how various complex forms in our ideas of things were configured
from these elemental forms was to be demonstrated mathematically
(Toulmin & Goodfield, 1962).
Plato's student, Aristotle, perhaps the greatest philosopher of the
ancient world, rejected the emphasis of his predecessors on a reality
that is not given in appearance. He was furthermore suspicious of the
claims for complete and exhaustive knowledge of reality implicit in the
concept of atoms. Our approach to understanding reality must be more
open-ended and schematic. Reality, he held, consists of two aspects:
matter and form. Matter is that out of which a thing is made. The
actuality of the thing is given by its form, which represents the way in
which the matter of the thing has been configured. Matter thus is what
in reality is potential, the bearer of properties, and becomes actualized
by taking on a given form. Change occurs by matter's undergoing
change in some of its properties. Continuity in change is maintained by
other properties (forms) remaining unchanged. A substance is identified by those properties that remain invariant through change. The
essence of a substance is the set of properties unique to the substance
that must remain unchanged if the substance is to retain its identity as
the substance undergoes changes into other forms. Furthermore,
reality is organized into a hierarchy of matter and form. That which on
one level is actualized into a given form may itself in turn be further
organized into more complex forms. Just as a brick may represent a
particular actualization (form) of clay (matter), a wall in turn may be
a particular actualization, that is, configuration of a pile of bricks.
Clay, on the other hand, may be a particular form made from simpler
substances. (The hierarchical structure of higher order factors in
common factor analysis to some extent draws upon this hierarchical
view of nature envisioned by Aristotle.)
Aristotle's psychology held that the senses capture the forms of
things but not their substance. Hence what is given to us in appearance is the formal aspect of reality. In taking this position, Aristotle
rejected the atomists' contention that apart from the qualities of
location and motion, those qualities, such as colors, sounds, and tastes,
given to us by the senses, are not real properties of things but rather
the effects of the motions of the atoms in things on our senses. Thus one
can understand how Aristotle believed he could rely upon the senses to
help him come to know the reality of the world. In this respect,
Aristotle may be regarded as the first major empiricist to rise in
opposition to rationalism and realism.
The motivations for many of Aristotle's positions arose from his
views as one of the world's first systematic biologists. One reason he
had rejected the atomist position was because it could not account for
the obvious stability of the forms of successive generations of living
organisms, or the regularity of their patterns of development. A world
consisting of invisible atoms that move aimlessly here and there and
which clump together into objects only because of the oppositions of
their motion could not conceivably produce the order or regularity that
he observed in nature. Matter must contain within itself, he held, the
tendency to develop into certain "natural," final forms (Toulmin &
Goodfield, 1962). Hence he believed that in trying to explain something, one must demonstrate that out of which it was made (material
cause), the essential features of its form (formal cause), the agent that
brought about that form (efficient cause), and the end (or final form)
toward which the object's form was directed (final cause).
His success as a biologist in developing taxonomies for the classification of organisms led Aristotle to believe that the principles he used
to discover classification schema would also work in helping him to
understand nature in general. The fundamental principle on which the
discovery of classification schema depends is induction, that is, finding
forms or properties that are common to a number of instances of
logically similar things. [There appears to be a certain circularity here
which infects Aristotle's as well as subsequent accounts of induction: if
we can only know universals by experience through induction, then
how do we know that a number of instances are all of the same kind,
that is, are all instances of a universal, which is a prerequisite for doing
induction?] Aristotle believed the persistence of sense impressions in
memory of numerous particulars permitted the mind to build up
within itself an awareness of those features that were common to them
and to provide the possibility for a systematizing of what is given in
experience (cf. Bk. II, Ch. 19, Posterior Analytics, McKeon, 1941).
Notwithstanding his emphasis on common forms as the basis for
empirical knowledge, Aristotle recognized the specific and the accidental. He distinguished between the essence of a thing, that form without
which a thing would not be the kind of thing that it is, and the
accidental, unnecessary, or varying forms by which a thing of that
kind may be observed. This distinction between the essential and the
accidental, between what is common and what is specific, laid a
foundation for the later development of the concepts of true values and
error in measurement and of common and specific factors in factor
analysis.
Because reality is knowable and arranged in hierarchical fashion,
Aristotle believed that one could discover additional knowledge by
deduction from empirically established first principles. (This was also
the idea of contemporary logical empiricism.) For this purpose he
invented the method of syllogistic logic, which is a method for reasoning about hierarchically organized systems. While Aristotle's scientific
method contains both inductive and deductive aspects, this was not
generally appreciated during the medieval period when most of his
works were unknown except for some short pieces on logic. When,
through the Arabs, the rest of his works were rediscovered by Europeans in the twelfth and thirteenth centuries, scientists were most
attracted to the deductive aspects of his methods of science. Some, such
as William of Ockham, Roger Bacon, and Richard Grosseteste, tried to
revive the inductive aspect of Aristotle's approach to science as well.
They even expanded on Aristotle's scientific method, requiring that,
following the induction of premises and the deduction of new knowledge, the scientist must obtain experimental confirmation of the
deduced consequences (Losee, 1980). But because through Thomism
Aristotle had become the official philosopher of the Catholic Church,
many of Aristotle's errors became enshrined as scientific canon and the
greatest emphasis on Aristotle's scientific method was placed upon the
deductive aspects.
The Search for Self-Authenticating Methods for Discovering Incorrigible Knowledge. From the 16th Century to the late 19th Century a
chief preoccupation of philosophers and theologians was the establishment of self-authenticating methods of obtaining incorrigible knowledge that could be used by anyone willing to discipline himself in their
use (Laudan, 1981). The Protestant Christian, for example, rejected
the expertise of a priestly class in matters of faith and morals, and
relied either upon the guidance of "the Inner Light" apprehended in
meditative intuition or upon a direct apprehension of the literal Word
of God in the Holy Scriptures, made generally available through the
new invention of printing (cf. Feyerabend, 1970). Rationalists, on the
other hand, relied on the capacity of reason and intuition to apprehend,
directly and clearly, fundamental, incorrigible Truths. Empiricists, in
contrast, emphasized the immediate intuition of incorrigible sensory
Truth as the basis for all sound knowledge. It was hoped that by a
systematic and methodological application of the means of apprehending Truth that incorrigible knowledge could be obtained in a self-authenticated manner; that is, the method would guarantee the
validity of its product (Laudan).
The Contribution of Baconianism. Francis Bacon (1561-1626) in
the first quarter of the 17th Century looked upon the emphasis of his
Aristotelian contemporaries on syllogistic deduction as a fruitless
enterprise. The sciences of his time, he held, were replete with errors
as scientist after scientist deduced "truth" after "truth" from unproved
premises. What was needed, he claimed, was a whole new approach to
acquiring knowledge, an approach that would abandon the Aristotelian method (as Bacon conceived it) of beginning with hypotheses and
deducing truths from them. And Bacon proposed in his Novum
Organum a "new" method of induction that would begin without
hypotheses or speculations, systematically interrogate nature, and
move to ever more general truths by means of an automatic procedure
or algorithm.
In its details, Bacon's method first involved, in the search for the
essential form underlying a phenomenal (manifest) nature of a given
kind, for example heat, preparing a table of instances all agreeing in
having the nature in question present. In looking over this table one
sought those natures that were common among all the instances in the
table. Next a parallel table of otherwise similar instances was constructed in which the nature in question was absent. If a nature found
to be present among each of the instances in the first table (of
presences) was also found among the instances of the second table (of
absences), then that nature was to be excluded as not a part of the
nature in question. Finally one constructed a table in which the nature
in question varied in degree. Then one looked for other natures that
also varied in degree in unison with the variation in the nature in
question. Again one excluded those natures that did not show this
property.
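Bacon's two tables can be sketched as a toy procedure. The instance descriptions below are invented for illustration, but the elimination logic follows the account above: keep only the natures common to every instance where the target nature (here, heat) is present, then exclude any that also occur where it is absent. It happens to echo Bacon's own conclusion in the Novum Organum that heat is a species of motion.

```python
# Toy tables of presence and absence for the nature "heat"
# (instance contents are invented, not Bacon's actual tables).
presences = [                      # instances in which heat is present
    {"flame", "light", "motion"},
    {"sunlight", "light", "motion"},
    {"friction", "motion"},
]
absences = [                       # otherwise similar instances lacking heat
    {"moonlight", "light"},
]

# Table of presence: retain only natures common to every positive instance.
candidates = set.intersection(*presences)

# Table of absence: exclude any candidate that also occurs where the
# nature under study is absent.
candidates -= set.union(*absences)

print(candidates)  # → {'motion'}
```

Note how "light" survives the table of presence but is eliminated by the table of absence, which is exactly the role negative instances play in eliminative induction.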
C. D. Broad (1968) and G. H. von Wright (1951) have called
Bacon's method "eliminative induction" and have credited Bacon with
the logical insight that affirming instances do not provide confirming
evidence for inductive propositions, while negative instances do provide disconfirming evidence, anticipating to some extent Popper's
views on scientific method, a view about Bacon put forth recently in
greater detail by Urbach (1982). But among early 19th Century
interpreters of Bacon, with the possible exception of Mill (1848/1891),
few fully understood the implications of eliminative induction for
scientific method, and this aspect of Bacon's method was not stressed.
One problem with this method is the presumption that the observations will necessarily all represent the same common property or its
absence, and that the number of underlying partially common properties that must be eliminated is finite. (Consider how similar assumptions also underlie Guttman's (1954, 1956) concept of an infinite
domain of manifest variables conforming to a determinate common
factor model because the manifest variables depend on a finite number
of common factors.)
I believe that the idea frequently attributed to Bacon of an
hypothesis-free algorithm or mechanical procedure for the discovery of
common properties in a given set of observations was the main
inspiration for the modern development of exploratory factor analysis
later on. For example, Cattell (1952, p. 21) has written, ". . . in the role
of an exploratory method, factor analysis has the peculiarity, among
scientific investigative tools . . . that it can be profitably used with
relatively little regard to prior formulation of a hypothesis." Notice
how the principle of eliminative induction on which Bacon's method
rests underlies procedures recommended for determining the interpretation of a factor. That is, one must look not only at what variables are
salient on a factor, but also at what variables do not load on the factor
(Cattell, 1978, p. 231). Bacon's method is also the inspiration of other
methods of automatic theory generation from data, such as Herbert
Simon's program "Bacon" (Bradshaw, Langley, & Simon, 1983) which
seeks to demonstrate that appropriately programmed computers can
show artificial intelligence by discovering well-known and even new
scientific theorems in empirical data given to them.
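The eliminative reading of a factor that Cattell recommends can be sketched briefly. The variable names and the 0.30 salience cutoff below are invented conventions for illustration; the point is only that interpretation weighs both the variables salient on the factor and those that fail to load on it.

```python
# Hypothetical loadings of four variables on one factor.
loadings_on_factor = {
    "vocabulary": 0.72,
    "analogies": 0.65,
    "reaction_time": 0.08,
    "finger_tapping": -0.04,
}
SALIENT = 0.30  # a conventional, somewhat arbitrary salience cutoff

salient = [v for v, l in loadings_on_factor.items() if abs(l) >= SALIENT]
nonloading = [v for v, l in loadings_on_factor.items() if abs(l) < SALIENT]

# The factor is interpreted by what the salient variables share AND by
# what the non-loading variables show it is not (here, not motor speed).
print(salient)     # → ['vocabulary', 'analogies']
print(nonloading)  # → ['reaction_time', 'finger_tapping']
```

The non-loading variables play the same role as Bacon's table of absences: they eliminate candidate interpretations that the salient variables alone cannot rule out.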
The rationalist method of analysis and synthesis. Bacon was not the
only philosopher in the 17th Century to attempt to formulate a universal
method for doing science. Perhaps, in this regard, the efforts at such a
formulation by the French Catholic philosopher, Rene Descartes
(1596-1650) (who lived most of his adult life in Protestant Holland), were
much more successful and influential. Descartes was a leading figure in
the rise of the mechanistic protest against the organismic philosophy of
Aristotle, which grew out of a revival of atomist ideas by such philosophers as Bacon and Galileo. Nature, these mechanists argued, is not an
organism but a machine. Furthermore, echoing the ancient atomists,
these mechanists (especially Descartes and his followers) regarded the
senses as a faulty source of knowledge, for what the senses present, by
themselves, is obscure and frequently illusory. Furthermore, partly
echoing Bacon, Descartes held that the syllogistic logic practiced by the
Aristotelians in the 17th Century was also useless because deductive
forms of logic are worthwhile only if one has true premises, but this the
Aristotelians could not guarantee with their haphazard reliance on
induction from sensory observations, if they did so at all. And Descartes
believed that there was more to deductive reasoning than the mere use of
syllogisms. What was needed was a method that would guide the mind's
use of reason to the apprehension of truths and their necessary interconnections. Descartes believed that only reason held within itself the
capacity to discover truths and their connections. Because every person
possesses reason, every person has the capacity to apprehend truth
directly. There is no need for a mediary, for an external authority to
provide the truth (echoing, perhaps, Protestant themes to which he had
been exposed in Holland).
Descartes was led to his method by his study of mathematics (he
was the inventor of Cartesian coordinate geometry). This study revealed to him the ancient method of analysis and synthesis, first used
by ancient Greek geometers such as Pappus. According to this method,
to solve a problem, say in geometry, one first must assume what is to
be proven is true, then work backward, breaking the problem down
into simpler truths, until one arrives at fundamental, already-known
truths such as already-proven theorems, axioms, and/or postulates.
This backwards breakdown of the problem is known as analysis or
resolution. Next, one must work forward, carefully retracing one's
steps, showing how the simpler truths may be combined to arrive
eventually at the complex truth that is to be proven. This forward-moving operation is known as synthesis or composition (Schouls,
1980). Now, predecessors of Descartes had applied this method occasionally outside of mathematics, but Descartes, because of his reflective and introspective self-consciousness, was the first to regard this as
a universal method for finding certain truths because it is (supposedly)
the way the mind operates to solve problems in general (Schouls).
Reason, Descartes held, has within itself two means of apprehending
truth: intuition and deduction. Intuition is the analytic operation of
reason. By intuition one breaks a problem down into simpler components
until one is then able to see clearly and distinctly what these components
are. In other words, intuition is "the apprehension which the mind, pure
and attentive, gives us so easily and so distinctly that we are thereby
freed from all doubt as to what it is that we are apprehending" (Descartes,
1958, p. 10). By seeing things as they are, clearly and distinctly, we see
that of which they are composed in such a way that they cannot be
decomposed further or confused with other things. In contrast, deduction
is the synthetic or combining operation of reason "by which we understand all that is necessarily concluded from other certainly known data"
(Descartes, 1958, p. 11). It is important to realize that for Descartes
deduction was not the same as syllogistic reasoning (Schouls, 1980).
"Many things are known with certainty," Descartes held, "though not by
themselves evident, but only as they are deduced from true and known
primary data by a continuous and uninterrupted movement of thought in
the perspicuous intuiting of the several items. This is how we are in a
position to know that the last link in a long chain is connected with its
first link, even though we cannot include all the intermediate links, on
which the connection depends, in one and the same intuitive glance, and
instead have to survey them successively, and thereby to obtain assurance that each link, from the first to the last, connects with that which is
next to it" (Descartes, 1958, pp. 11-12). As Schouls puts it, deduction is
"intuition on the move" insofar as intuition is involved in the apprehension that the links in the chain of reasoning are properly formed and
insofar as one goes over and over the successive links in one's mind until
one is able to see clearly and distinctly the whole succession of links (if
only abstractly) and how it reproduces that which is to be understood.
According to Descartes, one other operation, besides intuition and
deduction, is needed to ascertain truths: doubt. If imagination gives us
hypotheses about how to break things down or put them together,
doubt, systematically employed, urges us to use our reasoning faculties
of intuition and deduction until we clearly and distinctly grasp what is
truth in spite of doubt. Doubt performs the eliminative function of the
reasoning process, and in this respect Descartes' method has certain
affinities to Bacon's method of eliminative induction.
Descartes' method, based on analysis and synthesis, thus involved
following four rules, as stated in his Discourse on Method: (a) Accept no
idea that cannot be clearly and distinctly apprehended as true beyond
all possible doubt; (b) analyze complex questions into simple questions
so that by intuition one may be able to apprehend clearly and
distinctly the elements on which they depend; (c) order one's thoughts
and ideas into a series of inferences from the simplest to the most
complex so that the complex depend in some (perhaps hypothetical)
manner on the simpler ones before them; (d) go over the series of
inferences critically to make certain that there are no gaps or false
inferences in the series (Magill & McGreal, 1961).
In some respects, Descartes' emphasis on seeking truths that
withstand all possible doubt seems wedded to a hopeless ideal. But
Descartes believed he had discovered such indubitable truths by his
method. Most famous of all is his conclusion that because he thinks, he
exists. This cannot be doubted, for to doubt that one thinks leads one
to doubt that one doubts (a form of thinking), and this is self-contradictory. Because one knows indubitably that he/she doubts as
he/she doubts, one knows he/she exists. Intuition also yields to us a
clear and distinct apprehension of such apparently innate ideas as
substance, existence, causality, number, and time, for other ideas
depend on them, but they do not depend on any other ideas. But
Descartes also recognized that in practical and scientific matters one
cannot expect to achieve certain and indubitable knowledge and so, in
these matters, one must forego the relentless use of doubt. He recommended in these cases that one formulate hypotheses, subject them to
some doubt by matching them with alternative hypotheses, and then
pick the most probable hypotheses and subject them to experimental
test (Schouls, 1980).
At this point it is appropriate to make the obvious assertion that
from Descartes we get the idea of a universal method of analysis and
synthesis. Schouls (1980) points out how Descartes' introduction of this
method marks an intellectual watershed (over and beyond his contributions to reflective and subjectivistic approaches to philosophy). The
whole character of the way in which knowledge was typically sought,
organized, or presented was altered after Descartes as philosophers,
scientists, theologians, and planners began to employ the ideas of
analysis and synthesis as regulative ideas in their activities.
We must also point out that the ideas of analysis and synthesis are
embodied in the idea of factor analysis. In factor analysis we seek to
break down observed variables and their interrelations into the effects
of linearly independent (distinct), latent, component variables. We
first analyze the observed relations to find the distinct components of
the factor model, and then, by the fundamental theorem of factor
analysis, we reproduce or synthesize the observed relations from these
components. But to see how Bacon's idea of an automatic method of
forming correct inductions from experience and Descartes' idea of a
universal method based on analysis and synthesis become fused in a
statistical method such as exploratory factor analysis, we must trace
further the impact these two methodological ideas had on their
philosophical successors, the British empiricists.
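The analysis-then-synthesis cycle in factor analysis can be sketched on an assumed correlation matrix. For simplicity this extracts principal axes of R directly, without communality estimates, so it is a component-style solution rather than a full common factor analysis; the matrix itself is invented for illustration.

```python
import numpy as np

# An assumed (hypothetical) correlation matrix among four variables.
R = np.array([
    [1.00, 0.56, 0.20, 0.18],
    [0.56, 1.00, 0.22, 0.20],
    [0.20, 0.22, 1.00, 0.30],
    [0.18, 0.20, 0.30, 1.00],
])

# Analysis: break R down into orthogonal (linearly independent) components.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]          # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2                                      # retain two factors
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])

# Synthesis: reproduce the observed correlations from the retained factors.
R_hat = loadings @ loadings.T
residuals = R - R_hat
print(np.round(residuals, 2))              # small residuals = good synthesis
```

With all four components retained the synthesis is exact; retaining only the two largest leaves small residuals, which is the practical sense in which the factors "reproduce" the correlation matrix.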
The empiricists' pursuit of analysis and synthesis. The earliest
British empiricists, Thomas Hobbes (1588-1679) and John Locke
(1632-1704), shared much in common with Descartes. Like Descartes,
Hobbes and Locke were firmly committed to mechanism in explanations of nature. If in mechanistic explanations they differed from
Descartes it was with respect to mechanism's atomistic underpinnings:
Descartes could not accept the existence of atoms (for he believed
matter could be endlessly subdivided), whereas Hobbes and Locke, like
Bacon and Galileo before them, readily could. If Hobbes and Locke
differed in a major way with Descartes (and at the time it did not seem
to be such a major difference), it was in not seeing how reason alone
could bring us knowledge of the real. However much we may be misled
by the senses (as atomism averred), the senses provide our only
connection with the real world outside ourselves. Thus it was fundamentally important for the empiricists to analyze the process by which
we acquire knowledge of the world through the senses so that true
knowledge could be separated from error.
John Locke, who admitted the influence on his thought of Descartes'
writings, argued in 1690 in his An Essay Concerning Human Understanding that developmentally a person enters the world with a mind like
a blank slate. There are no innate ideas. Whatever ideas that person has
subsequentIy only enter his or her mind through experience, either from
sensation or reflection, As ideas first enter the mind, they are simple and
uncompounded, and, as such, can be neither created, decomposed, nor
destroyed by the mind-in other words, the simple ideas have properties
analogous to the properties of atoms and are thus elemental forms of
experience. By operating on the simple ideas, the mind is able to create
new ideas, which may be compounds of simple ideas (syntheses of them),
joined by association or by means of comparison, or they may be
abstractions (separating from them all the others that accompany them).
To fully understand a complex idea, one must analyze it into its component simple ideas, which means tracing its origins in experience. In these
respects we see Locke's commitment to the use of analysis and synthesis
in his method.
In the experience of objects, Locke argued, we may regard their
properties as of two kinds: primary qualities (extension, solidity,
figure, mobility, and number), which are accurate representations in
the mind of real qualities in external objects; and secondary qualities (color, odor, hot, and cold), which are ways our sense organs respond
uniquely to the particular combinations of primary qualities in objects.
The secondary qualities exist only in the mind. By this distinction
between primary and secondary qualities, Locke hoped to show how we
can obtain true knowledge of an external world through the senses,
even though in some respects we can be misled by them.
MULTIVARIATE BEHAVIORAL RESEARCH
Although by a process of analysis Locke had been able to trace a
number of complex ideas to the simple ideas by which they originated
in experience, he discovered a number of ideas he could not do this
with. For example, he readily accepted the importance of the idea of
substance as a bearer of properties and powers for effecting change and
being changed. But he could identify no simple idea in experience from
which the idea of substance arises. All that one could identify in the
concept of a substance was a set of associated simple ideas. There was
no simple idea in experience corresponding to the substance itself.
After Locke, George Berkeley (1685-1753) and David Hume
(1711-1776) were to argue that by admitting that some ideas (e.g., the
secondary qualities) are not representative of reality, Locke had no
way he could be certain that any of our ideas are representative of
reality. They also regarded his reluctant avowal of the idea of substance as inconsistent with his empiricist commitment to accept as
intelligible only ideas whose origins are traceable to simple ideas in
experience. Berkeley and Hume thus rejected Locke's idea of an
external real world embodied in real substances as unnecessary for an
empirical theory of knowledge. All that is real, all that exists, is that
which appears in experience. This is the position of phenomenalism. It
was a position that was not immediately popular among empiricists,
only becoming popular among British empiricists in the latter half of
the 19th Century. Even so, Locke's realism, that is, belief in a real
world behind sensory phenomena, retained adherents even into the
20th Century. In fact, Locke's view, which held that there exist real
but not directly observed entities behind our experiences, is an idea
implicit in the way many factor analysts regard the relation between
latent and manifest variables. Nevertheless, both Berkeley and Hume
remained unalterably committed to the idea of analyzing complex
ideas into those simple impressions in experience from which they are
formed. In their thought the focus of analysis is quite distinct from that
of rationalists like Descartes: Descartes used analysis to identify
simple ideas on which other, complex and possibly unclear ideas
Berkeley and Hume analyzed images in thought into almost
point-like particulars, like the images of patches of color derived from
sensory experience. When Hume analyzed causality, all he said he
could observe in the concept was that it was derived from experiences
of regular successions of similar kinds of phenomena, but he could
observe no particular phenomenal impression corresponding to a
connection between the kinds of phenomena in question (Hume
1739/1740/1969, p. 213). Similarly, when Hume considered the idea of
depended. Berkeley and Hume analyzed ima.ges in th.oughl into almost
point-like particulars, like the images of patches of color derived from
sensory experience. When Hurne analyzed causality all he waid he
could observe in the concept was that it .was derived from-experiences
of regular successions of eimilar kinds of phenomena, but he could
observe no particular phenomenal impression corresponding; to a
connection between the kinds of phenomena in question (Hume
1739/1740/1969,p. 213). Similarly, when Hum~econsidered the idea of
a familiar, enduring object, say, a table, all he could observe, he said,
was a number of perceptual qualities configured to some degree in a
customary way derived from repeated experiences of this configuration, but no separate impression of the subject itself that supposedly is
the bearer of these phenomenal properties (Hume 1739/1740/1969, pp.
271-272). Even reflecting introspectively on the nature of his self as an
agent yielded, Hume said, no impression of the self apart from a
myriad of phenomenal impressions.
Hume's skepticism even undermined the assumption cherished by
many other empiricists that induction could form the basis of deriving
incorrigible knowledge from experience. Since induction involves generalizing from past experience, Hume argued that there is no necessary reason why customary successions and conjunctions observed in
the past should continue to be observed in the future. We can always
imagine things succeeding and being joined in quite different ways
from those to which we are accustomed.
Kant's fallibilist synthesis of rationalism and empiricism. Before
considering how exploratory statistics was nurtured in the empiricist
tradition, we must first digress to a consideration of the philosophy of
Kant, which laid the groundwork for modern criticisms of both
rationalism and empiricism. On the Continent during the 17th and
18th Centuries rationalist philosophers like Descartes, Spinoza, and
Leibniz had sought to derive incorrigible scientific knowledge about
the world by deduction from self-evident first principles.
But it was plain to Immanuel Kant at the University of
Königsberg, Prussia, in the last half of the 18th Century, that the
rationalist enterprise had failed and was inferior to the system Newton
founded on experience. On the other hand, while regarding the
arguments of the empiricist philosophers like Locke, Berkeley, and
Hume as having great merit, he was dismayed by the skeptical
direction their thought had taken. He was quite unable to accept
their conclusion that there is nothing to our common-sense realist
belief in an objective world of things and objects, bearing properties
and acting causally upon one another, except habits of association of
phenomena. On the contrary, he saw such a belief in an objective world
amply supported in the physics of Newton. As a matter of fact, Kant
took the validity of Newton's physics so much for granted that it
became a problem for Kant how objective knowledge (not just knowledge in general), such as Newton's, is possible in the first place
(Brittan, 1978). Kant believed, however, that showing how objective
knowledge is possible could not depend on empirical or naturalistic
(psychological) explanations of the way in which we know the world,
such as Hume had offered. To do so would get one into a fallacious
circular argument in which empirical methods are used to justify
empirical methods. Rather, Kant believed the solution depended on
solving the problem of how we can know about objects in a most
general way independently of any scientific theory about objects
(which an empirical psychological theory of how one object knows
another object would involve). In other words, what can we know about
objects in general that is logically prior to the (empirical) knowledge we
gain of any particular object? This is the central problem of a
metaphysics of objects. But given the inability of the rationalists as
metaphysicians to converge on a metaphysical consensus and the
skepticism of the empiricists toward metaphysics, the question remained for Kant, "How is metaphysics possible?" (Dryer, 1966).
In his Critique of Pure Reason Kant (1781/1787/1958) points out
that no knowledge of things can be obtained without thinking. In this
respect, formal logic presents conditions that thought must follow if it
is to make a valid inference. And logic arrives at these conditions
completely a priori. But logic only concerns the form of thought
irrespective of its content. Kant wondered whether there is not
something similar to formal logic, say, a "transcendental logic" which
in an a priori way ascertains conditions with which thinking must
comply to obtain knowledge of objects. If there is, then there will be a
way to verify judgments that are true a priori in general of any object,
and metaphysics will then be viable (Dryer 1966, pp. 75-76). In this
respect, Kant takes for granted that he and everyone else is concerned
with obtaining knowledge of objects. But, if one should ask whether
the world "really" consists of objects, Kant would reply that there is no
way to answer that, if one means by that, a world as it is in itself,
independent of the way in which we experience and know the world.
(In taking this position, Kant rejected transcendental realism, the view
that we can know the world as it is apart from experience, in favor of an
empirical realism that regards objects as the form by which experience
is represented to us.) Knowing the world in terms of objects and things
is just the way or form in which we know it.
Kant believed that both the rationalists and empiricists were
misled by their special uses of analysis and synthesis into misunderstanding the problem of how one obtains knowledge of objects. The
rationalists analyzed knowledge into elementary, fundamental universal concepts, like number, causality, substance, quantity, totality,
and quality, excluding anything due to sensory experience because
they did not trust the senses. They then believed that by means of
these pure concepts alone one could derive knowledge about the world
of objects by a process of synthesis or deduction. The empiricists, on the
other hand, analyzed knowledge into empirical particulars, excluding
anything (like the rationalists' universals) that did not appear within
experience as a particular. Kant realized that if one analyzed experience into its raw particulars, one would get elements that were
completely unrelated logically to one another (as Hume contended of
the impressions). But such particular elements in themselves would
not constitute knowledge, for one cannot say anything about these
particulars without joining them by concepts. And the problem remains, then, how and from where are we able to synthesize the
particulars of experience to obtain knowledge? Empiricists like Hume
resorted to the synthetic operations of association, having the mind
join kinds of particulars that regularly appear conjointly or successively. But the empiricists refused to recognize explicitly that these
associative principles were bona fide elements of knowledge a priori.
They either were suspect or misleading, because they created the
illusions of necessary connections and enduring substances. (But
consider this analogy: Suppose we declare that knowledge of algebra is
about the real numbers. Then suppose we analyze algebraic concepts
into elements of the set of real numbers and toss away the axioms
concerning the synthesizing operations of identity, addition, and
multiplication on the grounds that "they are not what algebra is about,
because algebra is about real numbers." We would have no algebraic
knowledge of the real numbers. And if we retained the axioms, but
considered them a source of illusion, we would say we only have
illusory knowledge of algebra. This is similar to the way the empiricists like Hume argued against synthesizing concepts like causality
and substance and regarded knowledge based on these concepts as
illusory.)
Kant rejected the associative principles as the basis for his
synthetic operations for joining the particulars of experience into
complex concepts. The associative processes of the empiricists were too
passive in their operation. Kant regarded the mind's implementation
of synthetic as well as analytic operations as spontaneous and independent of the intuitions of the particulars given via the senses, which
were the passive, receptive organs for obtaining knowledge of objects.
In short, Kant synthesized the rationalist and empiricist positions,
arguing that knowledge of objects begins with experience, but not all
knowledge of objects arises out of experience. On the one hand, the
objects are given to us in sensuous intuition (direct and immediate
apprehension without meanings) via the passively receptive sensibility, which then orders them according to the a priori categories of space
and time; on the other hand, the understanding spontaneously provides concepts a priori for joining or synthesizing into thoughts the
diverse particulars given by intuition, for the synthesis of distinct
thoughts, and for the comparison of thoughts with sensory intuitions.
But Kant distinguished his position from the rationalists' and empiricists' positions by declaring that thoughts or concepts without (intuitive) content are empty; intuitions without concepts are blind. In
other words, the senses provide the matter and the mind the form for
thoughts (Kant 1781/1787/1958, B1, B75, A51; this notation refers by a common
convention among philosophers to pages in the first edition (A, 1781) and
revised edition (B, 1787) of Kant's Critique of Pure Reason). Neither
pure reason nor pure sensory intuition is alone sufficient for knowledge. Knowledge arises from the action of a spontaneous understanding operating with synthesizing a priori concepts on material provided
by the senses. (An example of a synthesizing a priori concept, which is
central to the formation of concepts of objects, is the subject-predicate
concept. By this concept, qualities (predicates) are joined to a subject,
the latter serving as a mark or sign in thought to which the qualities
are linked.) The understanding is also capable of analyzing complex
concepts into constituent concepts and intuitions. But standing over
the understanding and the sensibility is reason, which provides regulative principles with which to guide the understanding to a unified,
coherent, parsimonious, and objective conception of the objects of
thought and experience.
Because Kant believed the understanding operates spontaneously
rather than being passively driven by the contents of sensory intuition
(as the empiricists believed the impressions of sense drove the associative processes to the formation of ideas), Kant had the problem of
establishing how the mind is able to formulate and verify objective
knowledge about specific objects, because the mind has no direct
contact with a world of things as they are in themselves with which
concepts of objects may be compared. Unfortunately, Kant's Critique of
Pure Reason does not deal directly with this topic, which is central to
scientific method, but rather with the metaphysical topic of how
necessarily and a priori we must in general think of objects and their
interrelationships in order to obtain knowledge of them (Dryer, 1966,
p. 75). Nevertheless, by reading between the lines, we can see how he
would have to handle the scientific question.
Kant, I believe, would say that the mind has complete freedom and
spontaneity to form concepts about objects from the material given to
it by the senses. The mind does this spontaneously in the understanding, which is Kant's faculty of the mind concerned with the employment of imagination and the analysis and synthesis of sensory intuitions and previously formed concepts to form objective concepts of
things in experience. But saying that the mind has this freedom means
(analytically) that there is more than one distinct way in which the
mind might arrive at a concept of an object that unites the sensory
intuitions in question. (Actually, Kant would restrict some of the
mind's freedom by saying that everyone's understanding is universally
limited to performing only 12 kinds of synthetic operations in the
formulation of knowledge; but he would assert that the number of
ways in which these operations may be concatenated in forming a
concept of an object is endless in number.) These different conceptions of the object are initially only subjectively valid for the intuitions
in question. So, how does the mind judge the appropriateness and
objectivity of its various conceptions? It does this first by evaluating
the concepts according to regulative principles of reason. Under the
rule to assume nature as uniform, is the concept of the object consistent
with what is already known empirically about other similar or related
objects? Under the rule not to use ad hoc hypotheses to explain away
discrepancies between experience and conception, is the conception of the
object adequate to account for all the intuitions to which it currently
refers? Is the conception parsimonious? (Kant 1781/1787/1958,
A643-A663, B671-B691; A771-A782, B799-B810).
Next, Kant argued (Kant 1783/1977, p. 43) that the concept of an
object, to be objective, must unite intuitions of the object obtained from
different points of view according to a rule generated a priori in the
understanding. Additi~nally,the objective concept of the object must
be independent of one's parficular perceptual state or other subjective
factors (Allison, 1983, p. 150) and thus be universally valid. Hence,
any objective judgment we farm about the object must always h ~ l d
good for us (with additional experience) as well as for everyone else
(whom Kant presumed would be equipped with the same cognitive and
conceptual capabilities). Msreover, any objectivejudgment concerning
the same objed must;agree, with d l other judgments about the object
(there can be no internal cantradictions in our collective conception of
the object in experience), Though Want d~eh;
not make it emJicit, $he
last requiremqnt implies that we should be able to predict (knowing
the concept of the objed and the rule that unites sensory intuitions to
it) how the same object might appear if observed in a novel manner,
say, from a new point of view. A test of the objectivity of the concept is
made with an observation from a new point of view to see if it is
actually as predicted. If it is not, we must reject the conception as
objectively inadequate. For example, consider the illusion of the "Ames
chair," a figure consisting of lines and planes that when viewed from
one point looks like a chair, but when viewed from another point is not
a chair but rather a disconnected jumble of lines and planes (Zimbardo,
1985, p. 168). Although at first we think we see an objectively real
chair, we know subsequently it is not one, when we view it from a
different point. Thus for Kant subsequent experience can be corrective
in eliminating nonobjective concepts.
It is important to realize that Kant's argument was directed in
part against the empiricists who stressed the search for regularities in
experience as the basis for inductive generalizations. The thrust of
Kant's argument was that any regularity seen in a set of already given
phenomenal elements represents a free imposition by the mind of an
arbitrary rule onto these elements to join them together. But the rule's
objective validity is in no way established by showing that the rule is
consistent with these phenomena, for any number of rules may be
found that are consistent with the phenomena.
For example (Hempel 1945/1965), given a set of data points (xi, yi),
i = 1, . . ., n, we may seek to infer a functional relation that passes
through them and summarizes them. But the data points do not
determine a unique function that will do this unless one makes prior
assumptions about the form of the function to be considered and fixes,
in some cases, some of its parameters, with the values of the coordinates of the data points then determining the values of the remaining
parameters so that the function will then pass through the points. But
more than one distinct function may be given a priori that in this way
passes through the initially given points. The test of the objectivity of
these functions then depends on how they interpolate and extrapolate
to additional data points gathered under the prior assumption that the
points in question are all generated by a common functional relation.
Most of the functions that fit the initial points will not fit the additional
points. However, those that do fit the initial and additional points may be
regarded, provisionally, as "objective" conceptions of the process that
generates the points (Mulaik, 1987). Nevertheless, if more than one
function does pass even this test, then regulative rules of reason will
impel us to seek further extensions of these functions to additional data
until (in the ideal limit) all but one function is eliminated.
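The underdetermination described above is easy to demonstrate numerically. The following minimal sketch (the data points and candidate functions are invented for this illustration) constructs two distinct functions that agree perfectly on an initial set of points and then eliminates one of them with an observation at a new point:

```python
import numpy as np

# Initially given data points; for this illustration they lie on y = x.
x_init = np.array([0.0, 1.0, 2.0, 3.0])
y_init = x_init.copy()

# Two distinct functions given a priori, each passing through every initial point.
def f1(x):
    return x

def f2(x):
    # Adds a term that vanishes at each of the initial x-values.
    return x + x * (x - 1) * (x - 2) * (x - 3)

assert np.allclose(f1(x_init), y_init)
assert np.allclose(f2(x_init), y_init)

# The initial data cannot decide between f1 and f2. Suppose the generating
# process is in fact y = x, and an additional point is gathered at x = 4:
x_new, y_new = 4.0, 4.0
print(abs(f1(x_new) - y_new))  # 0.0  -> f1 survives as a provisionally "objective" conception
print(abs(f2(x_new) - y_new))  # 24.0 -> f2 is eliminated as nonobjective
```

Both functions pass the interpolation test on the initial points; only the extrapolation to new data discriminates between them, which is exactly the Kantian point about testing conceptions against further experience.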
But even Kant would have recognized that the ideal of a final
single function that passes through all points gathered to test its
objective adequacy is only an ideal. Reason impels us to regard
experience as a whole, as complete, and as unified under a homogeneous idea, he said. This leads us to think of final solutions, final
conceptions, and closed systems. But reason also impels us, Kant said,
by another regulative principle to regard experience in all its diversity
and detail, and this may lead us to see a diversity of processes where
others see but one, and we will then seek to fit not one but several
functions to different sets of points. But then a third regulative
principle will regulate our use of the other two principles: as we
proceed to establish diversity, do so in as gradual a way as possible, so
that we may establish a continuity of forms. In conceding the impossibility of final conceptions, and regarding the formulation of objective
concepts as guided by heuristic regulative principles, Kant broke with
the ideal of infallibilism and finalism in knowledge pursued by the
rationalists and empiricists before him. He then laid the groundwork
for the pragmatisms that arose at the end of the 19th Century,
although these were eclipsed for a while by infallibilism's last attempts
to establish forms of infallible knowledge: logical positivism and
logical empiricism in the 20th Century.
The implications of Kant's position for science were to be worked
out during the 19th Century by Kantian philosophers of science like
William Whewell in England during the first half of that century and
by Continental scientists like the Austrian Heinrich Hertz (Janik &
Toulmin, 1973) toward the end of the 19th Century.
William Whewell articulated the following rule for testing theories: "It is a test of true theories not only to account for but to predict
phenomena" (1984, p. 256). This principle was not new among those,
like Kant, trained in the rationalist tradition, its having been expounded by Descartes and Leibniz (Giere, 1983, p. 274). But Whewell's
rule represented an advance among British philosophers in the use of
theories and hypotheses conceived as free creations of thought, for
prior to Whewell's stating this rule it was commonly thought adequate
among those few who still used hypothetical reasoning to show that a
hypothesis is merely consistent with already observed facts. But this
was not thought adequate by most scientists of Whewell's time because
it was generally recognized that any number of hypotheses might be
formulated consistent with a given set of facts and so hypothetical
reasoning was strongly condemned as not productive of unique and
certain knowledge (Laudan, 1981). Science was better served, it was
commonly said, by making numerous observations and noting regularities and patterns and making cautious generalizations from these.
But Whewell's Kantian rule salvaged the use of hypotheses by providing a criterion for eliminating nonobjective ones.
The Austrian physicist, Heinrich Hertz, argued that physical
theories are not completely determined by experience but are freely
built models or constructions in thought from which possible experiences can be derived. The objective adequacy of these models is
provisionally judged by their logical consistency, by the richness of
detail with which they represent relations in experience, by the
parsimony of the schema of representation, and by the models' continued coherence with experience. A crowning achievement of this effort
to apply Kantian concepts in the physical sciences was seen at the turn
of the 20th Century in Einstein's theory of relativity, which stressed
identifying invariant physical relationships that are independent of
the reference frame and coordinate system in which they are expressed.
The contemporary legacy of Kant is seen in the new pragmatisms
and empirical realisms (Aune, 1970; Hübner, 1983; Rorty, 1982;
Tuomela, 1985) and in versions of linguistic philosophy where Kant's
fixed and eternal a priori categories of the human understanding have
given way to the conventional grammars of languages and related
representational schemas that serve specific forms of life (Hacker,
1972; Waismann, 1965; Wittgenstein, 1953).
Implications of Kant for Factor Analysis. The implications for
factor analysis of Kant's emphasis on proving objective conceptions in
experience were not always recognized by those who originally developed the model of factor analysis. This is because, as we shall show,
factor analysis developed mainly within a philosophical context that
remained generally ignorant of, if not hostile to, the philosophy of
Kant: empiricism. On the other hand, exploratory factor analysis does
exemplify Kant's conception of the imposition of a priori concepts onto
experience, which gives content to these concepts. Consider that in
presuming to analyze data with the exploratory factor analysis methods, we make certain a priori assumptions: We presume that the data
are to be united by linear functions between the observed variables
and a set of other variables known as the common factors. Nothing in
the raw data itself determines that we should unify the data in this
way. The latent common factor variables thus stand analogously with
respect to the various observed variables as Kant's concept of an object
stands with respect to diverse sensory intuitions: according to rules
(functions). And from Kant we should realize that common factors are
not so much "unobserved variables" knowable "in themselves" as they
are provisional objective conceptions that serve to unite the observations of numerous variables according to rules.
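The a priori linear form the model imposes can be made concrete. In the sketch below (the loadings and sample size are invented for illustration, not estimated from any real battery), five manifest variables are generated as linear functions of two latent common factors plus unique parts, and the model-implied correlation matrix Lambda Lambda' + Psi is recovered approximately from the simulated observations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative loadings: 5 manifest variables as linear functions of 2 common factors.
Lambda = np.array([[0.8, 0.0],
                   [0.7, 0.0],
                   [0.6, 0.3],
                   [0.0, 0.8],
                   [0.0, 0.7]])
Psi = np.diag(1.0 - (Lambda ** 2).sum(axis=1))  # unique variances (unit total variance)

# Model-implied correlation matrix: Sigma = Lambda Lambda' + Psi.
Sigma = Lambda @ Lambda.T + Psi

# Generate manifest scores from the latent linear model.
n = 100_000
F = rng.standard_normal((n, 2))                          # common factor scores
E = rng.standard_normal((n, 5)) * np.sqrt(np.diag(Psi))  # unique parts
X = F @ Lambda.T + E                                     # manifest variables

# The observed correlations approximate the synthesized (model-implied) ones.
R = np.corrcoef(X, rowvar=False)
print(np.max(np.abs(R - Sigma)))  # small: sampling error only
```

Nothing in the simulated data matrix X announces this linear structure; it is the analyst's prior commitment to the form Sigma = Lambda Lambda' + Psi that unites the observed correlations under latent variables, just as Kant's rules unite intuitions under a concept of an object.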
Because more than one formulation of the common factor model
for a set of data may equally fit the same data, the question of an
objective solution arises. Thurstone (1937), drawing upon principles
absorbed from the philosophy of science of his day that echo some
Kantian themes, rejected principal axes and centroid solutions as
final solutions because, for him, they represented statistical artifacts
(subjective concepts, in Kant's terms) that would change with additions of different tests to the analyzed test battery. Thurstone then
relied on the concept of a domain of tests closed on a set of common
factors (Kant's a priori regulative principle of seeing things as a whole)
from which samples of tests are drawn under the assumptions that (a)
each test drawn depends on less than all the common factors of the
domain and ideally on only one common factor (parsimony) and (b) the
tests represent all the diversity of factors in the domain (Kant's
regulative principle of diversity and detail) to formulate his concept of
the simple structure solution (Thurstone, 1947). Objectivity was further to be established by overdetermining each common factor in the
analysis by having at least four tests representative of each factor. In
constructing his test batteries for analysis, Thurstone usually operated
with prior conceptions about the factors of the domain, and so constructed or chose tests to represent single factors or simple combinations of these factors, thereby allowing his analysis to be a test of these
prior conceptions (hypotheses). Thurstone's simple structure concept
was then a way of identifying objectively invariant solutions for the
factors. As long as one's domain of tests contained tests representing
no more than simple combinations of a fixed but nominally large set of
factors of the domain, then the tests fall within hyperplanes, the
intersections of which define the factors of the domain. Sampling
different tests from within these same hyperplanes will not change the
outcome in terms of identifying these hyperplanes and thus the
corresponding objective factors of the domain.
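That more than one formulation of the model fits the same data equally well can be shown directly: any orthogonal rotation of a loading matrix reproduces the correlations exactly as well as the original solution. The loadings below are invented for illustration:

```python
import numpy as np

# An illustrative 4-variable, 2-factor loading matrix.
Lambda = np.array([[0.8, 0.1],
                   [0.7, 0.2],
                   [0.1, 0.8],
                   [0.2, 0.7]])
Psi = np.diag(1.0 - (Lambda ** 2).sum(axis=1))
Sigma = Lambda @ Lambda.T + Psi  # correlations reproduced by this solution

# Any orthogonal matrix T yields a rival loading matrix Lambda T that
# reproduces Sigma identically, since (Lambda T)(Lambda T)' = Lambda Lambda'.
theta = 0.7  # an arbitrary rotation angle
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Lambda_rot = Lambda @ T

assert np.allclose(Lambda_rot @ Lambda_rot.T + Psi, Sigma)
print(Lambda_rot)  # a different, equally well-fitting pattern of loadings
```

Simple structure can thus be read as a principled rule for choosing among these infinitely many equally fitting rotations, one whose result is invariant under resampling of tests from within the same hyperplanes.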
Factor analysts who do not select tests with prior hypotheses
about the factors have the problem of establishing the objectivity of
their interpretations of the factors at the end of a factor analysis. Can
they construct additional tests representative of their conceptions of
the factors and include them with the original tests and obtain a
common factor solution that is consistent with the solution obtained
for the original set of tests? Will the original tests have the same
loadings on the respective factors in the new analysis? Mulaik and
McDonald (1978) showed how this would be a way to test the objectivity of an interpretation of factors (although they did not refer to this as
objectivity but rather called it validity).
The rise of exploratory statistics from empiricism. At the end of the
18th Century, Kant's rational-empirical philosophy was mainly influential on the Continent, especially in Germany, while traditional
empiricism prevailed as the dominant philosophy of science in Britain,
France, and the then new United States. Nevertheless, at this time
British empiricism was split into three factions: The first was exemplified by the philosophies of David Hartley and James Mill. They
continued in the Lockean tradition which permitted the use of hypotheses about invisible objects, like atoms, in a real world behind
appearance. The second, a minority position among empiricists, reflected the phenomenalist position of Berkeley and Hume that regarded phenomena as the only reality. The third, which aggressively
came into vogue at the start of the 19th Century, was exemplified by
the Common Sense school of philosophy of Thomas Reid (1710-1796)
and his disciple Dugald Stewart (1753-1828). Reid, like Kant, had
recoiled from the skeptical conclusions of Hume, which seemed to Reid
contrary to common sense. Reid believed Hume's skeptical conclusions
rested on a fatal flaw: Hume's conception that the mind has no objects
but phenomenal impressions presented to it is nothing but a hypothesis
whose truth has never been demonstrated. In Reid's time arguments
from unproved hypotheses were generally regarded as unconvincing:
one can formulate any number of hypotheses that account for the facts.
Thus over the unlimited number of hypotheses that all fit the same set
of facts, Reid and many of his contemporaries believed the probability
that any given hypothesis is true would be essentially zero. And so the
use of hypotheses in empirical matters is to be avoided in favor of a
program of making careful observations and accurate descriptions of
the facts and inducing generalizations therefrom (Laudan, 1981; Magill
& McGreal, 1961). To support his campaign against hypotheses, Reid
drew upon the authorities of Francis Bacon and Isaac Newton. Had not
Bacon attacked the Aristotelians for their specious use of hypotheses
and then invented a new inductive method to discover truths from
observed facts? Had not Newton claimed that he feigned no hypotheses, that all his results were derived inductively from experiments? In
scientific method Reid's empirical method of conducting science without hypotheses became known as Baconism and was quite influential
among rank-and-file scientists in the first half of the 19th century
(Daniels, 1968; Laudan, 1981; Yeo, 1985), with adherents of this view
even in the 20th Century.
It is in this early 19th Century empiricist milieu, which denigrated hypotheses and stressed description of facts, that exploratory
and descriptive statistics developed. It is important to realize that in
this milieu there was a general lack of clear criteria for establishing
the objectivity of conceptions formulated from observations, for
Kantian views on objectivity were little known or understood. I shall
argue that a major failing of exploratory and descriptive statistics is its
general lack of criteria of objectivity which frequently renders its users
incapable of distinguishing mathematical artifact from an objective
representation. Because the use of exploratory factor analysis is
patterned after the use of other exploratory statistical methods, this
use often fails to consider questions of objectivity for factor analytic
results.
I have already written an account of the rise of exploratory
statistics out of British empiricism (Mulaik, 1980, 1985), and so I shall
not repeat the details of that account here. Nevertheless, one of my key
arguments in that account is that exploratory statistics developed in
'part as a mathematical emulation and extension of the British and
French empiricists' conceptions of the associative processes of the
mind. For example, the Belgian Royal Astronomer turned social
statistician, Adolphe Quetelet (1846/1849, pp. 38-43), regarded the
taking of the mean as a fundamental mental operation inherent in
human nature for identifying what is representative of diverse objects
of the same kind. Francis Galton (1908/1973), who learned much of his
statistics from Quetelet via the reference cited, believed averaging was
the key to the way the mind forms generic concepts. To see what the
average face looked like, Galton studied composite portraiture (the
superimposition of photographic images of peoples' faces one on the
other). In many respects Galton's views on this matter echoed
Aristotle's views on how the induction of generic concepts takes place.
Subsequently, Pearson (1892/1911), well versed in the views of both
Quetelet and Galton, but influenced by the instrumentalist views of
the Austrian physicist Ernst Mach, saw the antirealist implications of
their views: Our concepts are not about an independent reality but
rather are "ideal limits" created by the mind by averaging experience.
And so, scientific laws are but summaries of average results. But
Pearson sidestepped the question of the objectivity of these laws by
regarding them as but fictions, justified by their usefulness.¹ Pearson
believed one could use statistical procedures such as curve fitting,
which represents fitting a curve to a locus of means, to summarize or
resume trends in data (Hogben, 1957; Pearson, 1911). The pervasiveness of the influence on subsequent multivariate statistics of Pearson's
idea of averaging to generate conceptual entities must not be underestimated. It turns up in Lord and Novick's (1968) concept of the true
score, in modified form in the idea of principal components (invented
by Pearson (1901)), in Guttman's (1953) image concept, and most
recently, in Bartholomew's (1981, 1984, 1985) components that serve
as proxies for the latent common factors in the model of factor analysis.
Galton (1878) invented the idea of the correlation coefficient,
which is but a mathematical emulation of the idea of association in
British empiricist psychology. Pearson (1892/1911), inspired by the
empiricist works of Berkeley, Hume, and Mill, later took association or
correlation to be the essence of causality, claiming that earlier views of
causality were outmoded in the probabilistic world he envisaged.
Pearson then developed a maximum likelihood estimator of the correlation coefficient, which bears his name. Pearson also took Galton's
idea of regression and turned it into a statistical method. Galton had
regarded regression as evidence supporting his view that inheritance
of characters is transmitted by numerous latent particles. But Karl
Pearson (1903), influenced by Mach's view that scientific theories are
but summaries and resumes of observations, attributed no biological
significance to regression, regarding it as but a way of summarizing
relationships between variables. Subsequently, Pearson's demonstrator, Udny Yule (1909, 1911), conflated regression and correlation with
the older method of least squares, and because of Yule's influence as a
textbook writer, the method of least squares entered the new statistics
as "regression." But in the process, many of the caveats that 19th
Century physical scientists like William Whewell (1847/1966) placed
on the use of least squares to ensure objective results were forgotten as
regression became an exploratory or Baconian technique for the
discovery of causal (associative) relations between variables. Yule
¹Pearson, as well as Mach, did not grasp the point that if you call something
"fiction," you must call something else "nonfiction." But for Mach and Pearson there
were no ways of synthesizing or summarizing experience that were not fictions, and so
the term "fiction" lost, in their usage, any distinctiveness. However, I think their intent
was to reject the notion that our concepts are about things as they are in themselves and
to say, as Kant would, that our conceptions of things are constructed from our
experiences and furthermore are not final but can change with additional experience. It
is a pity that they failed to grasp Kant's distinction between "subjective" and "objective"
syntheses of experience.
(1909) was also responsible for developing the now standard notation
for the study of partial and multiple correlation. Many subsequent
developments in exploratory statistics represent extensions, elaborations, and modifications of the simple concepts of the mean, correlation, and curve fitting, developed by the end of the 19th Century.
Artifact and Objectivity in Statistics. To understand how
empiricism's lack of criteria for objective relations led developers of
exploratory statistics to confuse what is artifactual with what is
objective, we will consider how some questionable interpretations
given to the use of means arose in the 19th Century and then show how
analogous interpretations later were attached to the factors of factor
analysis.
One important feature of taking means, argued William Augustus
Guy, professor of forensic medicine at King's College, London, in 1839,
is ". . . that the extreme quantitative differences existing between the
several things observed, compensate one another, and leave a mean
result which accurately expresses the value of the greater number of
the things so observed. . . ." (Guy, 1839, p. 32). The mean, which is a
quantity completely determined by the raw data, simply distills the
common value among a set of quantitative observations. On such
common or representative values, Guy and others believed, one could
build an empirical, inductive science.
To those, like the British empiricists, who were willing to let their
minds be passively driven by their sensory experiences in the pursuit
of the secrets of Nature, such accounts as Guy's of the virtues of the
new descriptive statistics were quite acceptable. One simply let the
data determine statistical quantities via statistical technique applied
to the data. This was particularly so if you could not envisage
alternative forms of summary description for the same sense data. But
for those who actively wrestled with their sensory experiences, seeking
to impose first this concept and then another onto those experiences
until he or she found one that was not only coherent with past
experiences but with new ones as well, using descriptive statistics,
after the fashion of Guy and others, was just not adequate. Thus
we find in the last half of the 19th Century Claude Bernard, the
founder of modern medical physiology, arguing that physiology should
be pursued as a theoretically driven experimental science and ridiculing the mindless application of descriptive statistics in medicine and
physiology. He cited this particularly absurd use of the mean:
If we collect a man's urine during twenty-four hours and mix all this urine to analyze
the average, we get an analysis of a urine which simply does not exist; for urine,
when fasting, is different from urine during digestion. A startling instance of this
kind was invented by a physiologist who took urine from a railroad station urinal
where people of all nations passed, and who believed he could thus present an
analysis of average European urine! (Bernard, 1865/1927/1957, pp. 134-135).
Bernard's claim that the daily average urine is a urine that simply
does not exist raises the question of when a mean can be said to
represent something that does exist. Bernard argued that science
should seek objective rather than subjective truths. So, when does the
mean (or any other statistical quantity) represent something objective
as opposed to something "subjective" or artifactual? This was not an
issue that many 19th Century statisticians operating in the British
and French empiricist traditions were prepared to solve, for they had
no clear concept of objectivity.
Quetelet (1846/1849, pp. 42-43) tried to identify the conditions
under which it was appropriate to regard a mean value as representative of a real value in nature and not an artifact. He believed that
only when one takes the average of a normally distributed set of
observations does the mean represent a real common value, common
to each observation, with deviations from the mean in individual
observations the result of adding unsystematic, extraneous errors of
nature or observation to this common value. He was led to this view
from his acquaintance in astronomical theory (taught him by the
French mathematician and probabilist, Laplace) with the normal
distribution, which in his day was called "the distribution of error." In
astronomical measurements, errors were presumed added to the true
value to be estimated and were normally distributed with a mean of
zero. One averaged numerous numerical observations taken under
certain uniform conditions to cancel as much as possible the effects of
errors in the observations to get a value more representative of a
"true" value common to all the observations (Laplace, 1796/1951). Thus
when Quetelet discovered a data set consisting of the chest girths of
Scottish soldiers measured to outfit them in uniforms for the war
against Napoleon, he was astonished when a plot of the distribution of
these scores was essentially normal. He was led to believe, then, that
the mean of these scores represented the value of the chest girth of the
average man, a human prototype, common within each individual
man, to which extraneous errors of nature are added to produce the
individual's measurements (see also Porter, 1985).
But what Quetelet did not consider was that, given a normally
distributed set of scores, they may have been generated in a way other
than by adding a set of normally distributed errors with a mean of zero
to some value common to all the observations. By the central limit
theorem we know that many random variables can be generated
having distributions that closely approximate the normal distribution
by adding together numerous random variables and dividing the sum
by the number of variables entering the total. And these numerous
random variables may all represent objectively real processes that
combine to produce each individually observed value. For example, a
physical characteristic measured may be the result of the combined
quantitative effects of numerous genes or their respective alleles, all of
which are real causes of the characteristic measured. In other words,
the variation in individual values may reflect intrinsic and not
extraneous variation. There may be no or only negligible extraneous
influences on the values observed. Thus the fact that a distribution of
scores conforms to the normal distribution cannot be used as a
sufficient indicator that the distribution's mean has an objective
interpretation as a constant value common to each observation. We
shall call the reification or objectivization of a mean value, when it is
a statistical artifact having no objective referent, the average man
fallacy, after Quetelet's regarding the mean measurement as representative of the value of a real average man to be found in each
individual man. This fallacy generalizes to all cases where a statistical
(or mathematical) artifact is uncritically treated as if it were representative of a real attribute of some entity.
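The central-limit argument above can be illustrated with a small simulation. This is a sketch, not from the article, and all parameter values are arbitrary: each score is built by averaging many independent components, so no constant "true value" is common to the observations, yet the distribution still looks normal.

```python
import random
import statistics

# Sketch: each "observation" is the mean of many independent uniform
# components. No common true value plus error is involved, yet the
# resulting distribution is approximately normal.
random.seed(0)

def score(n_components=50):
    # Sum of many independent component variables divided by their
    # number, as in the central limit theorem discussion above.
    return statistics.mean(random.uniform(0.0, 1.0) for _ in range(n_components))

scores = [score() for _ in range(5000)]
mu = statistics.mean(scores)
sd = statistics.stdev(scores)

# For a normal distribution, roughly 68% of values lie within one
# standard deviation of the mean.
within_one_sd = sum(abs(x - mu) < sd for x in scores) / len(scores)
```

Because the near-normality here arises from aggregation itself, a normal-looking histogram cannot certify that the mean is a value common to each observation.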
It is the experimentalists who came to identify the conditions
under which it is appropriate to regard the mean of some scores in
some experimental condition as representative of an objective effect
common to them, that is, the value of an attribute common to them.
First, all of the experimental units under an experimental treatment
condition must be exposed to the same causal influence. Second, a
given experimental treatment should (plausibly) produce a unique
effect; variation in observed values must be due exclusively to the
effects of extraneous causal variables that vary independently of
variation in the independent variable of the experiment and combine
additively with the effect of the cause in question. In other words, there
must not be some equally plausible alternative hypothesis under
consideration in the scientific community that would account for the
same outcomes, for example, the value of the fixed causal condition
produces multiple effects by interacting differently with each of the
values of several other causal variables that in turn combine to
produce the effect observed in the individual experimental unit.
Variation in this second case is then intrinsic and not extrinsic.
If there are competing alternative hypotheses about the mean,
then one must take steps to rule out all but one of them. For example,
to rule out the second hypothesis wherein variation is intrinsic and not
extrinsic, one might consider possible intrinsic causal variables, measure the experimental units on these, and then group the experimental
units into groups homogeneous in values on these potentially intrinsic
causal variables and study the effect of the experimental treatment on
means and variances within these groups. If the means of these
homogeneous subgroups differ significantly, then this suggests that
variation between individual observations is intrinsic and not just
extraneous, and the mean is an artifact if regarded as an attribute of
an individual observation and not simply of the group of scores as a
whole.
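The subgrouping check just described can be sketched in a few lines. This is a hypothetical illustration: the "genotype" variable and every number in it are invented, standing in for any measured intrinsic causal variable.

```python
import random
import statistics

# Sketch: units carry an intrinsic cause (here a hypothetical binary
# "genotype") that determines their typical value. Grouping units by
# this variable and comparing subgroup means reveals whether the
# variation is intrinsic rather than extraneous error.
random.seed(1)

def observed_value(genotype):
    # Intrinsic variation: each genotype has its own typical value;
    # only a little extraneous noise is added.
    base = {"A": 10.0, "B": 16.0}[genotype]
    return base + random.gauss(0.0, 0.5)

units = ["A" if i % 2 == 0 else "B" for i in range(200)]
data = [(g, observed_value(g)) for g in units]

grand_mean = statistics.mean(v for _, v in data)
mean_a = statistics.mean(v for g, v in data if g == "A")
mean_b = statistics.mean(v for g, v in data if g == "B")

# The subgroup means differ sharply, so the grand mean (about 13)
# describes the pooled group but matches neither subgroup; treating it
# as a value common to each unit would commit the average man fallacy.
gap = abs(mean_b - mean_a)
```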
In general, tests for the objectivity of a statistic depend on
establishing its invariance when observed under conditions presumed
to be critical in determining a unique value for the statistic but
varying in other irrelevant ways. But testing hypotheses and presumptions was foreign to Baconian, 19th Century science, which stressed
description and cautious inductive inferences from data.
At this point we may seem to have drifted far afield from the
subject of factor analysis. However, we who use factor analysis are all
familiar with how Spearman (1904) put forth the hypothesis of a single
common factor among measures of intellectual ability. Thomson (1916,
1919) challenged Spearman's contention that his data confirmed the
objective reality of a single common factor among measures of intellectual ability. Thomson's argument was analogous to the argument
used by Bernard and others to expose the average man fallacy in many
researchers' uncritical interpretation of means: the way nature generates the data on which the mean is based may not conform to one's
concept of how the data was generated. Thomson similarly showed how
one could generate variables that would have the pattern of intercorrelations consistent with a single common factor and yet not be
generated by adding a common variable to each of several uncorrelated
variables respectively. Thomson argued that these variables could be
generated by, first, extracting, in a particular way with replacement
from a very large set of uncorrelated variables, various partially
overlapping samples of variables, without any subset of variables
being common to these samples, and then, second, adding the variables
in these respective samples together to produce the manifest variables.
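Thomson's construction can be sketched numerically. This is a sketch with arbitrary parameters, not Thomson's own figures; it uses the fact that for sums of uncorrelated unit-variance "bonds," the population correlation of two sums is their overlap count divided by the geometric mean of the sample sizes.

```python
import math
import random

# Sketch of Thomson's construction: each manifest variable is the sum
# of a random sample of "bonds" drawn from a large pool of uncorrelated
# unit-variance variables, with no subset of bonds common to all tests.
random.seed(2)

N_BONDS = 20000
SAMPLE_SIZE = 8000  # number of bonds entering each manifest variable

pool = range(N_BONDS)
samples = [set(random.sample(pool, SAMPLE_SIZE)) for _ in range(4)]

def corr(i, j):
    # cov = number of shared bonds; var of each sum = its sample size.
    overlap = len(samples[i] & samples[j])
    return overlap / math.sqrt(len(samples[i]) * len(samples[j]))

r = {(i, j): corr(i, j) for i in range(4) for j in range(4) if i < j}

# Spearman's tetrad differences vanish for a single common factor; here
# they are near zero even though no bond is shared by all four tests.
tetrad = r[(0, 1)] * r[(2, 3)] - r[(0, 2)] * r[(1, 3)]
```

The correlation matrix produced this way is consistent with a single common factor, which is exactly Thomson's point: the pattern alone cannot establish that a single common variable was added to each test.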
Spearman was vulnerable to this criticism because he had not committed himself as to what he expected to be common among the
variables when he hypothesized they had a single common factor:
Spearman offered no manifest sign of something common to the
variables (as we analogously offer a manifest sign of something
common among scores in an experiment when we link them all to a
common experimental treatment) to lend plausibility to his hypothesis. (It would have taken an analysis like Guttman (1965) provided
with his faceted definition of intelligence to come up with a clear
identifier of Spearman's general ability: analytic or rule-inferring
ability.) Perhaps the moral is clear: without a prior conception of what
we model in Nature with a particular statistical model, we fail to take
the steps necessary to guarantee the plausibility of the interpretation
we give the results against criticisms of artifactuality. But one is
hardly likely to take such steps if one conducts science according to the
rules of 19th Century British empiricism that eschew substantive
hypotheses and active direction of the observational process (because
that would "prejudice" the results).
The common factor model was first formulated at the turn of the
20th Century in Britain in the statistical milieu at the University of
London which was dominated by the last of the great 19th Century
British empiricists, Karl Pearson. Pearson rarely formulated substantive hypotheses and then tested them. He was simply content to
describe, discover, and summarize. It is understandable then that
British scientists, who took up the correlational statistics invented by
Pearson and his students and who elaborated them into factor analysis, would proceed in the same exploratory way. But it was a way
constantly vulnerable to the criticism of artifactuality.
Factor Indeterminacy. The next development in exploratory factor
analysis that has philosophical implications is factor indeterminacy,
which was first exposed by E. B. Wilson (1928) in a review of
Spearman's The Abilities of Man. Wilson's contention was that there
could be no unique interpretation or definition of g given within
Spearman's methodology. This is because the g variable is not
uniquely determined by the observed variables. All that we know from
a factor analysis are the correlations of the observed variables with the
g variable, and, mathematically, it is possible that a number of distinct
variables might be found that have the same pattern of correlations
with the observed variables as does the g factor. So, if interpreting the
g factor implies stating which variable in Nature it corresponds to, we
cannot do so uniquely from just the information given by a factor
analysis.
Steiger and Schönemann (1978) and Steiger (1979) have written a
thorough history of the factor indeterminacy issue. They show how the
problem of factor indeterminacy was wrestled with by British psychologists up until the middle 1930s, but the problem was not fully
resolved, partly because it was not fully understood. Then, as subsequent developments of factor analytic methodology shifted to the
United States under Thurstone and his students, this topic was
forgotten. It was through the efforts of Guttman (1955) that this topic
was resurrected. Although focusing on the indeterminacy of factor
scores in his article, Guttman (1955) also pointed out the implication of
factor indeterminacy for the interpretation of factors. Our interpretations are based upon the factor loadings produced by the factor
analysis. But these are not sufficient to determine uniquely the
variables corresponding to the factors. With respect to each common
factor, we can construct more than one distinct variable that relates to
the manifest variables in the same way as the factor of the factor
analysis, as revealed by its correlations with the manifest variables. In
fact, since we are able to compute the multiple correlation ρ for
predicting the common factor from the manifest variables, we can
determine that two distinct constructions for the same common factor
can be as minimally correlated as 2ρ² − 1. For example, if the multiple
correlation for predicting a factor is .707, then two distinct constructions of the factor could be as minimally correlated as zero; and if the
multiple correlation is less than .707 (a not infrequent
finding), they could be negatively correlated. To Guttman the fact that
two distinct but legitimate interpretations of a factor could correspond
to variables that were partially opposed to one another was absurd.
Something, he believed, was fundamentally wrong with the model of
common factor analysis. He urged factor analysts to look at models
that were free of indeterminacy, such as his image analysis model
(Guttman, 1953). Although his image analysis model received considerable interest, his comments on the implications of indeterminacy for
the interpretations of common factors went largely ignored, perhaps
because they were embedded in a paper whose mathematics were
difficult for many to follow. Nearly a decade and a half passed before
another attempt was made to resurrect the factor indeterminacy issue.
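Guttman's minimum-correlation bound discussed above can be checked directly. The sketch below uses the standard decomposition of a candidate construction of the factor into a part predictable from the manifest variables (variance ρ²) and an arbitrary orthogonal residual (variance 1 − ρ²); two constructions differing only in the sign of the residual are unit-variance variables correlated at exactly 2ρ² − 1.

```python
import math

def min_correlation(rho):
    # Guttman's bound: the minimum correlation between two distinct
    # constructions of a factor whose multiple correlation with the
    # manifest variables is rho.
    return 2.0 * rho ** 2 - 1.0

def construction_correlation(rho):
    # Represent the predictable part and the residual as orthogonal
    # coordinates; the inner product of the two unit-variance
    # constructions f1 = p + s and f2 = p - s gives their correlation.
    p = rho
    s = math.sqrt(1.0 - rho ** 2)
    f1 = (p, s)
    f2 = (p, -s)
    return f1[0] * f2[0] + f1[1] * f2[1]
```

With a multiple correlation of .707 (ρ² = .5) the two constructions can be exactly uncorrelated, and below .707 they can be negatively correlated, which is the situation Guttman found absurd.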
A successful resurrection of the factor indeterminacy issue was
carried out by Schönemann (1971) and his students (Schönemann &
Wang, 1972; Steiger & Schönemann, 1978; and Steiger, 1979). Like
Guttman, Schönemann and his students have regarded the factor
indeterminacy problem as a serious problem for the common factor
model and have recommended (Schönemann & Steiger, 1976) that
researchers use some determinate variant of the common factor model,
a component analysis model, instead of common factor analysis. This
approach to resolving the indeterminacy issue has also been echoed
recently by Bartholomew (1981, 1984, 1985), although on somewhat
different grounds.
In contrast, Mulaik and McDonald (1978) and McDonald and
Mulaik (1979), while accepting the indeterminacy of factor interpretation in exploratory common factor analysis, have not regarded this
as a fatal flaw requiring the abandonment of the common factor model
in favor of component analysis models. I should like to amplify on their
position here: Factor indeterminacy is just a special case of a more
general form of indeterminacy commonly encountered in science,
known to philosophers of science as the empirical underdetermination
of theory: data by themselves are never sufficient to determine
uniquely theories for generalizing inductively beyond the data (Garrison, 1986). Inductive methods of generating theory always have an
indeterminate element in them. Those who fail to recognize the
indeterminacy of induction frequently do so because they are victims of
the "inductivist fallacy" (Chomsky & Fodor, 1980), the belief that one
can make inductive inferences uniquely and unambiguously from data
without making prior assumptions. This was the fallacy of John Stuart
Mill's excessive claims for his inductive, empirical methods for discovering causes (Copi, 1968). And it is the fallacy behind the way many
users of exploratory common factor analysis have expected the method
to provide them with unambiguous results.
In actuality, given any set of data, there is an unlimited number
of possible inferences we might form as to how the data is to be
generalized. For example, we have already considered the indeterminacy of fitting a function to a discrete set of data points in our
discussion of Kant's concept of objectivity. Recall that whatever choice
we make for the function that supposedly generates a set of data points
will be arbitrary as far as the data itself is concerned (Hempel,
1945/1965). We can commit ourselves to certain assumptions that lead
us to pick one of infinitely many generalizing functions, but we will
never know whether these assumptions are not simply artifactual
unless we find a way to put them to the test with additional data. A test
of the objectivity of the function chosen will then be the extent to which
it extrapolates and interpolates to additional data points with new
values for x, under the presumption that the additional data are
generated according to the same rule. The point is not that induction
is a flawed method that must be abandoned, but rather that whatever
inferences we do form from data with inductive methods (and the prior
assumptions required for their use) must be evaluated with additional
data. In other words, if induction is to have any kind of empirical
merit, it must be seen as a hypothesis-generating method and not as a
method that produces unambiguous, incorrigible results.
With these comments on the empirical underdetermination of theory
in mind, let us now consider the drawback of abandoning the common
factor model in favor of component analysis models because their factors
are determinate. Component factors are artifacts and do not strictly
represent inductive generalizations beyond the data. If we do use component factors inductively it is by treating them as approximations of, say,
common factors. We have already seen in our discussion of Kant's
distinction between the subjective and the objective how Thurstone
(1937) rejected principal axes and centroid factors because he suspected
they were arbitrary and artifactual. Thurstone was supported in these
concerns for the artifactual status of principal components and centroid
factors by Wilson and Worcester (1939), who asked, "Why should there be
any particular significance psychologically to that vector of the mind
which has the property that the sum of squares of the projections of a set
of unit vectors (tests) along it be maximum?" (p. 136). We might
generalize this criticism by asking "Why should there be any particular
psychological significance to any specific linear combination of the observed variables that optimizes . . . ?", the question to be completed by
stating some a priori mathematical function of the observed variables.
Perhaps in rebuttal of this criticism one would argue that a latent
variable has no "real" existence, and so, we are free to define latent
variables in any convenient way that allows us to explain the covariation between, and the scores on, the observed variables (cf.
Bartholomew, 1981, p. 95). But do determinate components provide an
adequate explanation? The strategy followed by using determinate
component factors would generate a different set of components for
each different set of variables analyzed. This is because component
factors are not invariant under varying selections of variables analyzed except in very stringent and unrealistic cases. If we augmented
the original set of variables with additional "real-world" variables and
reanalyzed with the same method, we would not get the same component factors. Are explanatory constructs then ad hoc, only good for a
specific set of variables? If so, then one has violated the principle of
parsimony in explanation, for one needs a different set of explanatory
constructs for each distinct set of variables.
Because component factors are defined as linear combinations of
the observed variables, they necessarily occupy the space spanned by
the observed variables. But why should our explanatory constructs
necessarily occupy the space of some specific set of variables whose
covariances are to be explained? Most scientific concepts carry surplus
meanings beyond those invoked in explaining certain properties of
objects in specific situations. But the strict meaning of component
factors is exhausted by their relationship to the observed variables
that define them. It is strange that an explanatory construct is defined
by what it is to explain, as if the explanation is explained by what it
explains. This brings us to the fact that component factors are not
consistent with the grammar of causal concepts. The components are
defined as specific linear combinations of the effect variables (the
components are supposed to explain), but causes generally are not
strictly determinate from effects but rather must be distinct from what
they explain (cf. Mulaik, 1987). Thus component factors could not be
regarded as causes of the observed variables.
However, as Mulaik and McDonald (1978) pointed out, common
factor models show invariance of loadings for original variables when
these variables are included with additional variables generated by a
conformable common factor model. Hence, one can use this fact as a
basis for a test of one's interpretation of the common factors based on
an analysis of the original set of variables. If the original common
factor model defined on the original variables is not conformable with
the common factor model for an augmented set of variables that
includes the original variables and additional variables, presumed
generated by the same common factors, then one's interpretation of
these factors is faulty. On the other hand, passing such a test is no
guarantee that the original set and the additional set of variables were
generated by the same factors. The test only determines whether one's
interpretation of the factors, manifested in the rule by which the
additional variables are generated, is a viable interpretation. There is
no way to eliminate the indeterminacy of the factors for the original
set of variables by adding variables to the original set of variables.
Although the degree of indeterminacy for the augmented set of
variables is reduced from that of the original set of variables, it is
nevertheless relative only to the augmented set of variables.
Nothing I have said so far precludes the possibility (established in
Mulaik & McDonald, 1978) that two researchers may each form
different interpretations or hypotheses about the common factors for
the original variables and proceed to add variables to the original set
of variables according to these hypotheses and in each of their
respective cases discover that their augmented sets of variables continue to obey a common factor model conformable with the common
factor model of the original variables. And yet, were they to get
together and pool all their variables together in one grand factor
analysis, they would discover that jointly their two sets of'variables do
not conform to the original common factor model. This would be the
sign that they have defined the factors differently, thereby embedding
the original variables in different factor domains.
At this point we should see that the indeterminacy of an inductive
procedure like exploratory common factor analysis confronts us with a
decision each time we use the procedure: we must decide what the
results of the procedure shall mean for us beyond the data at hand. In
other words, inductive procedures do not automatically give us meanings. It is we who create meanings for things in deciding how they are
to be used. Thus we should see the folly of supposing that exploratory
factor analysis will teach us what intelligence is, or what personality
is. At some point, we have to define for ourselves, in ways that seem
interesting and promising for us, what we shall mean by these terms
and do so in ways that others will find interesting and objectively
determinable. But we don't always need the results of an exploratory
factor analysis to do this. There are many ways to arrive at a
definition. By this I am not advocating the old idea of operational
definitions, because operational definitions are ordinarily ad hoc and
not very interesting or useful beyond the situations in which they are
given. It takes quite some skill to select a concept and to define it
objectively so that it is capable of integrating and synthesizing in a
useful way numerous experiences beyond just those in the initial
defining situation.
Thus if exploratory factor analysis cannot tell us what something
is, perhaps we can consider other forms of factor analysis that begin
with well-defined concepts and then seek to study the relations of these
concepts with others in a way where we can decide objectively that
they apply. And one way I know how to do this and still do factor
analysis is with confirmatory factor analysis. But confirmatory factor
analysis is only one of many ways, and perhaps a relatively restricted
way at that, of studying relations between concepts.
Conclusion
In closing, we see how the idea of common factor analysis draws
upon a rich philosophical heritage. From the idea of the Greek
atomists that appearance is to be explained by something not observed,
from the emphasis on analysis and synthesis of Descartes, from the
ideal of an automatic algorithm for discovering knowledge of Francis
Bacon, and from the idea of correlational exploratory statistics as an
inductive method developed by empiricist statisticians like Karl
Pearson and Udny Yule, exploratory common factor analysis derived
its fundamental ideas. From this same heritage users of exploratory
common factor analysis also derived false expectations, that the
method could yield unique and unambiguous knowledge about the
fundamental causes of a domain of variables without prior assumptions: the inductivist fallacy. This expectation founders on the indeterminacy of common factors. But indeterminacy is not fatal for the
model of common factor analysis nor for exploratory common factor
analysis, if the method is seen with less ambitious expectations as a
hypothesis-generating method, providing information for the researcher to use in formulating hypotheses. But we see from Kant that
the freedom to endlessly generate novel hypotheses demands our
finding a way of using experience to sort out those hypotheses that are
subjective and artifactual constructions from those that have an
objective import going beyond the specific set of data stimulating the
hypothesis. This can be done only by testing hypotheses with additional data. Hence, confirmatory common factor analysis is a logical
sequel to exploratory common factor analysis.
References
Allison, H. E. (1983). Kant's transcendental idealism. New Haven, CT: Yale University
Press.
Aune, B. (1970). Rationalism, empiricism, and pragmatism. New York: Random House.
Bartholomew, D. J. (1981). Posterior analysis of the factor model. British Journal of
Mathematical and Statistical Psychology, 34, 93-99.
Bartholomew, D. J. (1984). The foundations of factor analysis. Biometrika, 71, 221-232.
Bartholomew, D. J. (1985). Foundations of factor analysis: Some practical implications.
British Journal of Mathematical and Statistical Psychology, 38, 1-10.
Bernard, C. (1957). [An introduction to the study of experimental medicine] (H. C. Green,
Trans.). New York: Dover Publications. (First published in France in 1865 and
first issued in an English translation in 1927).
Bradshaw, G. F., Langley, P. W., & Simon, H. A. (1983). Studying scientific discovery by
computer simulation. Science, 222, 971-975.
Brittan, G. G. (1984). Kant, closure, and causality. In Harper, W. L., & Meerbote, R.
(Eds.), Kant on causality, freedom, and objectivity. Minneapolis: University of
Minnesota Press.
Broad, C. D. (1968). Induction, probability and causation. Dordrecht: D. Reidel.
Cattell, R. B. (1952). Factor analysis. New York: Harper & Bros.
Cattell, R. B. (1978). The scientific use of factor analysis in behavioral and life sciences.
New York: Plenum Press.
Chomsky, N., & Fodor, J. (1980). The inductivist fallacy. In Piattelli-Palmarini, M. (Ed.),
Language and learning: The debate between Jean Piaget and Noam Chomsky.
Cambridge, MA: Harvard University Press.
Copi, I. M. (1968). Introduction to logic (3rd ed.). London: Macmillan.
Daniels, G. H. (1968). American science in the age of Jackson. New York: Columbia
University Press.
Descartes, R. (1958). Rules for the guidance of our native powers. In N. K. Smith (Ed.
and Trans.), Descartes: Philosophical writings. New York: The Modern Library.
Dryer, D. P. (1966). Kant's solution for verification in metaphysics. London: George
Allen & Unwin Ltd.
Feyerabend, P. K. (1970). Classical empiricism. In Butts, R. E., & Davis, J. W. (Eds.),
The methodological heritage of Newton. Toronto: The University of Toronto Press.
Fuller, B. A. G., & McMurrin, S. M. (1955). A history of philosophy (3rd ed.). New York:
Holt, Rinehart and Winston.
Galton, F. (1888). Correlations and their measurement, chiefly from anthropometric
data. Proceedings of the Royal Society of London, 45, 135-145.
Galton, F. (1973). Inquiries into human faculty and its development. New York: AMS
Press, Inc. (Reprint of 1908 ed. of materials published 1883).
Garrison, J. W. (1986). Some principles of postpositivistic philosophy of science. Educational Researcher, 15, 12-18.
Giere, R. N. (1983). Testing theoretical hypotheses. In Earman, J. (Ed.), Minnesota
studies in the philosophy of science, Vol. X (pp. 269-298). Minneapolis: University of
Minnesota Press.
Guttman, L. (1953). Image theory for the structure of quantitative variates.
Psychometrika, 18, 277-296.
Guttman, L. (1954). Some necessary conditions for common-factor analysis. Psychometrika, 19, 149-161.
Guttman, L. (1955). The determinacy of factor score matrices with implications for five
other basic problems of common-factor theory. British Journal of Statistical
Psychology, 8, 65-81.
Guttman, L. (1956). "Best possible" systematic estimates of communalities. Psychometrika, 21, 273-285.
Guttman, L. (1965). A faceted definition of intelligence. Scripta Hierosolymitana, 14,
166-181.
Guy, W. A. (1839). On the value of the numerical method as applied to science, but
especially to physiology and medicine. Proceedings of the Royal Statistical Society,
Series A, 2, 25-47.
Hacker, P. M. S. (1972). Insight and illusion: Wittgenstein on philosophy and the
metaphysics of experience. Oxford: Oxford University Press.
Hempel, C. G. (1945). Science and the logic of confirmation. Mind, 54, 1-26; reprinted in
C. G. Hempel (1965), Aspects of scientific explanation. New York: Free Press.
Hogben, L. (1957). Statistical theory. London: George Allen & Unwin Ltd.
Hübner, K. (1983). Critique of scientific reason (P. R. Dixon, Jr., & H. M. Dixon, Trans.).
Chicago: University of Chicago Press. (First published 1978).
Hume, D. (1969). A treatise of human nature (E. C. Mossner, Ed.). Baltimore: Penguin
Books. (First published in 1739 and 1740).
Janik, A., & Toulmin, S. (1973). Wittgenstein's Vienna. New York: Simon and Schuster.
Kant, I. (1958). [The critique of pure reason] (N. K. Smith, Trans.). New York:
Modern Library. (Abridged edition; first published in 1781 and revised in 1787).
Kant, I. (1977). [Prolegomena to any future metaphysics that will be able to come forward
as science] (P. Carus, Trans.; revised by J. W. Ellington). Indianapolis: Hackett
Publishing Co. (Original work published in 1783).
Laplace, P. S. (1951). A philosophical essay on probabilities (F. W. Truscott & F. L.
Emory, Trans.). New York: Dover. (Originally published in 1796).
Laudan, L. (1981). Science and hypothesis. Dordrecht: Reidel.
Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading,
MA: Addison-Wesley.
Losee, J. (1980). A historical introduction to the philosophy of science. Oxford: Oxford
University Press.
Magill, F. N., & McGreal, I. P. (1961). Masterpieces of world philosophy in summary
form. New York: Harper & Row.
McDonald, R. P., & Mulaik, S. A. (1979). Determinacy of common factors: A nontechnical
review. Psychological Bulletin, 86, 297-306.
McKeon, R. (Ed.). (1968). The basic works of Aristotle. New York: Random House.
Mill, J. S. (1891). A system of logic. London: Longmans, Green & Co. (First published in
1843).
Mulaik, S. A. (1980, November). A critical history of the origins of exploratory statistics in
British empiricism. Paper presented to the annual meeting of the Society for
Multivariate Experimental Psychology, Ft. Worth, TX.
Mulaik, S. A. (1985). Exploratory statistics and empiricism. Philosophy of Science, 52,
410-430.
Mulaik, S. A. (1987). Toward a conception of causality applicable to experimentation and
causal modeling. Child Development, 58, 18-42.
Mulaik, S. A., & McDonald, R. P. (1978). The effect of additional variables on factor
indeterminacy in models with a single common factor. Psychometrika, 43, 177-192.
Pearson, K. (1901). On lines and planes of closest fit to systems of points in space.
Philosophical Magazine, Ser. 6, 2, 559-572.
Pearson, K. (1903). The law of ancestral heredity. Biometrika, 2, 211-229.
Pearson, K. (1911). The grammar of science (Part I: Physical). London: Adam & Charles
Black. (Original edition 1892).
Porter, T. M. (1985). The mathematics of society: Variation and error in Quetelet's
statistics. British Journal for the History of Science, 18, 51-69.
Quetelet, M. A. (1849). Letters addressed to H. R. H. the Grand Duke of Saxe Coburg and
Gotha, on the theory of probability as applied to the moral and political sciences
(O. G. Downes, Trans.). London: Charles & Edwin Layton. (Originally written in
1837 but published as Lettres... sur la théorie des probabilités. Brussels: 1846).
Rorty, R. (1982). Consequences of pragmatism. Minneapolis: University of Minnesota Press.
Schonemann, P. H. (1971). The minimum average correlation between equivalent sets of
uncorrelated factors. Psychometrika, 36, 21-30.
Schonemann, P. H., & Wang, M. (1972). Some new results on factor indeterminacy.
Psychometrika, 37, 61-91.
Schonemann, P. H., & Steiger, J. H. (1976). Regression component analysis. British
Journal of Mathematical and Statistical Psychology, 29, 175-189.
Schouls, P. A. (1980). The imposition of method: A study of Descartes and Locke. Oxford:
Clarendon Press.
Spearman, C. (1904). General intelligence, objectively determined and measured.
American Journal of Psychology, 15, 201-292.
Steiger, J. H., & Schonemann, P. H. (1978). A history of factor indeterminacy. In S. Shye
(Ed.), Theory construction and data analysis in the behavioral sciences (pp. 136-178).
San Francisco: Jossey-Bass.
Steiger, J. H. (1979). Factor indeterminacy in the 1930's and 1970's: Some interesting
parallels. Psychometrika, 44, 157-167.
Suppe, F. (Ed.). (1977). The structure of scientific theories (2nd ed.). Urbana: University
of Illinois Press.
Thomson, G. H. (1916). A hierarchy without a general factor. British Journal of
Psychology, 8, 271-281.
Thomson, G. H. (1919). The proof or disproof of the existence of general ability. British
Journal of Psychology, 9, 321-336.
Thurstone, L. L. (1937). Current misuse of the factorial methods. Psychometrika, 2,
73-76.
Thurstone, L. L. (1947). Multiple factor analysis. Chicago: University of Chicago Press.
Toulmin, S., & Goodfield, J. (1962). The architecture of matter. New York: Harper and
Row.
Tuomela, R. (1985). Science, action and reality. Dordrecht: D. Reidel.
Urbach, P. (1982). Francis Bacon as a precursor to Popper. British Journal for the
Philosophy of Science, 33, 113-132.
von Wright, G. H. (1951). A treatise on induction and probability. London: Routledge and
Kegan Paul.
Waismann, F. (1965). The principles of linguistic philosophy (R. Harré, Ed.). London:
Macmillan.
Whewell, W. (1966). The philosophy of the inductive sciences founded upon their history
(2nd ed.). New York: Johnson Reprint Corporation. (Originally published in 1847).
Whewell, W. (1984). Selected writings on the history of science (Y. Elkana, Ed.). Chicago:
University of Chicago Press.
Wilson, E. B. (1928). Review of The abilities of man, their nature and measurement by C.
Spearman. Science, 67, 244-248.
Wilson, E. B., & Worcester, J. (1939). Note on factor analysis. Psychometrika, 4,
133-148.
Wittgenstein, L. (1953). Philosophical investigations. New York: Macmillan.
Yeo, R. (1985). An idol of the market-place: Baconianism in nineteenth-century Britain.
History of Science, 23, 251-298.
Yule, G. U. (1909). The applications of the method of correlation to social and economic
statistics. Journal of the Royal Statistical Society, 72, 721-730.
Yule, G. U. (1911). An introduction to the theory of statistics. London: Charles Griffin &
Co.
Zimbardo, P. G. (1985). Psychology and life. Glenview, IL: Scott, Foresman & Co.