
Walls of Glass

Metaphors of Personal Identity in Derek Parfit and Teresa of Ávila

Personal identity over time is an idea derided by analytic philosophy.  Hume began the process of debunking the person, or self, as “nothing but a bundle or collection of different perceptions.”[1]  The demolition job concluded in 1984, with the publication of Derek Parfit’s Reasons and Persons.  Parfit (1942–2017) found liberation in dispensing with personal identity—shattering the “glass tunnel” that kept him apart from life.  Parfit uses the metaphor of glass walls to express personal identity as a constraint.  Four hundred years earlier, by contrast, St. Teresa of Ávila (1515–1582) used glass walls to express personal identity as a luminous, faceted beauty—an interior crystal castle where God lives within us.  The difference between the philosophers and their vitreous metaphors is that Parfit’s person is a self, and Teresa’s is a soul.

Parfit: Under a Dark Glass

In Reasons and Persons, Parfit conducts thought experiments like teleportation and brain switching to show that personal identity over time could only be true if there were a “further fact” beyond our psychological states, which “owns” or “holds” those states.  But there is no further fact beyond the psychological states, nothing beyond the Humean bundles of perceptions, memories, and the like.  Therefore, there is no personal identity, no self, over time.  As a consolation, Parfit shows us that personal identity doesn’t matter anyway, because all we really care about is surviving in some way or other.  To render Parfit’s meticulous examples crudely but briefly: if the cross-galaxy teleporter was buggy one day, and only deposited a very close replica of my body, brain, and brain states—not my original, self-same molecules and brain states—I could learn to live with it.  I might even enjoy it, as Parfit enjoyed life once he stopped worrying about personal identity:[2]

When I believed my existence was such a further fact, I seemed imprisoned in myself.  My life seemed like a glass tunnel through which I was moving faster and faster, and at the end of which was darkness.  When I changed my view, the walls of my glass tunnel disappeared.  I now live in the open air.  There is still a difference between my life and the lives of other people.  But the difference is less.

Parfit did not dismiss the human concern for personal identity.  In an interview, he recognized that people want a “guaranteed identity,” a way to exist as ourselves over time.  He also noted that this desired identity “would be true if what each of us really was, was a soul.”[3] But Parfit dismissed the existence of souls, which he called a type of “further fact” beyond our immediate and verifiable sense experience.  Without a soul, we do not have clearcut individual identities over time.

Obviously, Parfit thought “soul” had huge explanatory power: It could explain personal identity over time.  But access to this power required a leap of faith, something Parfit felt unprepared to make.  In contrast, Teresa of Ávila leapt without hesitation.  She didn’t rejoice over the shards of a broken self, but built a lasting identity—a soul—from the most brilliant glass.

St. Teresa: Luminous Walls

Teresa began her masterwork, Interior Castle, by sharing her vision of the soul as a castle made of “very clear crystal” or “a single diamond,” in which there are “many rooms, just as in heaven there are many mansions.”[4]  Teresa identifies seven rooms or abodes (moradas), each with its own ethical and spiritual significance, with God fully present and experienced in the seventh.  The crystal castle is the soul—the place where God abides.  But the soul also enters and travels through the rooms of the castle.  Teresa expressed this duality throughout Interior Castle.  She also put it in one of her poems, “Alma buscarte” (“Soul, Seek Yourself”).  In this poem, God calls the soul “my chamber, my dwelling, and my house” (soul as castle), but also gives counsel to the soul if it “gets lost inside my tinted caverns” (soul entering and traveling through the castle).[5]

This is an apparent, rather Parmenidean contradiction—how can a soul both be a thing and enter that thing?  Teresa directly addresses the contradiction as the very nature of what we might call the search within—a feature of human self-consciousness also integral to prayer and meditation:[6]

Now let us return to our beautiful and delightful castle and see how we can enter it.  I seem rather to be talking nonsense, for, if this castle is the soul there can clearly be no question of our entering it.  For we ourselves are the castle: and it would be absurd to tell someone to enter a room when he was in it already. But you must understand that there are many ways of “being” in a place…You will have read certain books on prayer which advise the soul to enter within itself: and that is exactly what this means.

This passage suggests Teresa’s concept of personal identity over time.  Identity is a function of one’s soul being a durable, beautiful, and complex edifice, which also possesses the unusual quality of being able to search itself and, in so doing, meet God.  The soul is the structure that owns or holds consciousness (a further fact, per Parfit).  Self-consciousness provides a two-fold proof of persistence over time: first, the continuity of inner experience in the temporal sequence of entering and traveling through the soul; second,  the experience of an everlasting, indwelling God. As Teresa wrote in her poem, “Hermosura de Dios” (“God’s Beauty”), God has the astounding ability to bind the eternal to the mortal: “What a knot you tie from two unequal things…you join the mortal—what need not be, with that must be—Eternal.”[7] Discovering God within shows all seekers that we connect to the immortal, and so we endure.

Teresa, like Parfit, imagines darkness as well as clear glass.  But unlike Parfit’s tunnel of glass leading to darkness, Teresa’s darkness is the muck full of venomous creatures outside the crystal castle.  This is the state of sin.  Before the soul enters itself, it lives outside itself in sin; to enter the crystal castle is to start renouncing sin and embarking on the search within.  There is comfort in Teresa’s metaphors, because they suggest our identity is not obliterated by sin.  We always have the soul as structure, the castle within us, with God in residence.  In sin, the soul has not yet entered its own lovely dwelling—but the option is always there, beckoning with its brightness.

Sorrowful Self, Delighted Soul

Of course, history, circumstance, and religious practice enabled Teresa to think about the soul in ways that Parfit found impossible.  In unpacking personal identity, the most Parfit could seriously contemplate was the self, defined by such things as bodily continuity, memory, or psychological connectedness.  All of these failed to pan out when mined for the “further fact” of personal identity, as would be expected, per Hume, from mere collections of sense data, impressions, and ideas.  But Parfit knew what Teresa knew: the optimal further fact of personal identity is not the self, but the soul.  The difference is that Parfit rejected the soul, while Teresa embraced it.

It is fascinating to consider that Parfit and Teresa, separated by four centuries, both chose glass as metaphor to express personal identity.  The sense that a transparent barrier individuates me from others is a common modern feeling, and perhaps it was in Teresa’s day also.  The key point is that Parfit and Teresa lent opposing meanings to the metaphor: Parfit’s glass tunnel defines personal identity as the miserable prison of self, best demolished; Teresa’s crystal castle defines personal identity as our glorious, protective home—the soul.  That again is the difference between self and soul.  One is sorrow, the other, delight.


Author Bio

Dana Delibovi is a poet, essayist, and translator who started out as a philosopher.  Her new book of translations and essays, Sweet Hunter: The Complete Poems of St. Teresa of Ávila, will be published on St. Teresa’s feast day, October 15, 2024, by Monkfish Books.  Delibovi’s work has appeared in After the Art, Noon, Presence, Salamander, U.S. Catholic, and many other journals.  She is a 2020 Best American Essays notable essayist and a consulting editor at the e-zine, Cable Street.  Delibovi holds a BA in philosophy from Barnard College and an MA in philosophy from New York University, where she studied with Peter Unger.


[1] David Hume 1740: A Treatise of Human Nature, ed. P.H. Nidditch (Oxford, UK: Oxford University Press), 252.

[2] Derek Parfit 1984: Reasons and Persons (Oxford, UK: Oxford University Press), 281.

[3] 1996: “Derek Parfit on Personal Identity.” Available at https://youtu.be/d7asDhjj7Xk?si=vntx-TXL9VHtwDWb.  Accessed 11 March 2024.

[4] St. Teresa of Ávila 1577: Interior Castle, ed. and trans. E. Allison Peers (New York, NY: Image Books, 1961), 28.

[5] St. Teresa of Ávila, 1576: “Alma, buscarte has en mí” / “Soul Seek Yourself in Me,” trans. Dana Delibovi, U.S. Catholic 87, no. 9 (2022): 15.

[6] Teresa 1577: Interior, 31.

[7] St. Teresa of Ávila 1577: “Hermosura de Dios” / “God’s Beauty”, trans. Dana Delibovi, U.S. Catholic 87, no. 3 (2022): 21.


Bellarmine on the Defeat of the Devil

In his meditations upon the seven last words of Christ, spoken from the cross, St. Robert Bellarmine (1542–1621) offers a series of reflective considerations most apt for this season, not only for the Christian, but for all who would think carefully on the meaning of life, death, and the universe. Particularly poignant in this, a time of irresponsibility—that is, a time in which every fault is deflected to some cause other than our own wills—is the following reflection on the sixth word, “It is consummated.” Here, Bellarmine contemplates the defeat of the devil and its consequences for our moral life.

There is another reason which St. Leo adduces, and we will give it in his own words. “If our proud and cruel enemy could have known the plan which the mercy of God had adopted, he would have restrained the passions of the Jews, and not have goaded them on by unjust hatred, in order that he might lose his power over all his captives by fruitlessly attacking the liberty of One Who owed him nothing” (Tract. sept. et nonag. 62). This is an exceedingly weighty reason. For it is just that the devil should lose his authority over all those who by sin had become his slaves, because he had dared to lay his hands on Christ, Who was not his slave, Who had never sinned, and Whom he nevertheless persecuted even unto death. Now, if such is the state of the case, if the battle is over, if the Son of God has gained the victory, and if “He will have all men to be saved” (1 Tim. 2:4), how is it that so many are in the power of the devil in this life, and suffer the torments of hell in the next? I answer in one word: They wish it. Christ came victorious out of the contest, after bestowing two unspeakable favors on the human race. First that of opening to the just the gates of Heaven, which had been closed from the fall of Adam to that day, and on the day of His victory He said to the thief who had been justified by the merits of His blood through faith, hope, and charity, “This day thou shalt be with Me in Paradise” (Luke 23:43) and the Church in her exultation cries out, “Thou having overcome the sting of death, has opened to believers the Kingdom of Heaven.” The second, of instituting the Sacraments which have the power of remitting sin and of conferring grace. He sends the preachers of His Word to all parts of the world to proclaim, “He that believeth, and is baptized, shall be saved” (Mark 16:16). And so our victorious Lord has opened a way to all to attain the glorious liberty of the sons of God, and if there are any who are unwilling to enter on this way, they perish by their own fault, and not by the want of power or the want of will of their redeemer.

St. Robert Bellarmine 1618: The Seven Last Words from the Cross, 177-78.

“They wish it.” A simple statement and a hard truth. We wish not to be responsible for ourselves. But we are. We have the means to live better; we do not choose them. Unjustly we place blame on society or other forces beyond our control—genetics, upbringing, the economy, and so on. Yet choice remains always open to us. What choices will we make today, tomorrow? We perish by our own fault; the sting of death is removed. We may not wish damnation (or moral weakness) directly, but we pursue myriad desires that are not in keeping with our nature or our true good.

The way to the good, however, remains always open to us.

Language, Non-Existent Objects, and Semiotics

In the 19th and 20th centuries, a fever for scientific explanation of all phenomena gripped many an intellectual. Language, however, has proved resistant to the methods of modern science. Too many aspects of our experience prove irreducible to the empiriometric approach successful in disciplines such as chemistry or biology. This resistance vexes the reductionist’s mind. Most especially have non-existent objectivities—that is, the various ways in which we can talk about objects that do not exist as things—proved a great source of this vexation.

For natural languages, those we use in our everyday efforts at communication, cannot be conformed to precisely denotative maps of conceptual correspondence.  As such, many have attempted the invention of artificial languages.  But these artificial languages—although they have proved useful in the development of technical apparatus—cannot convey the richness of experience found in our natural languages.  They cannot, therefore, “explain scientifically” what those languages accomplish in our experience.

By contrast, let us hear what John Deely has to say about the relationship between language, non-existent objects, and semiotics:

Language Reconceived Semiotically

I hope to show how the semiotic point of view naturally expands… to include the whole phenomenon of human communication—not only language—and, both after and as a consequence of that, cultural phenomena as incorporative of, as well as in their difference from, the phenomena of nature. The comprehensive integrity of this expansion is utterly dependent upon the inclusion of linguistic phenomena within the scheme of experience in a way that does not conceal or find paradoxical or embarrassing the single most decisive and striking feature of human language, which is, namely, its power to convey the nonexistent with a facility every bit equal to its power to convey thought about what is existent.

Let me make an obiter dictum on this point. When I was working at the Institute for Philosophical Research with Mortimer Adler on a book about language (i.1969–1974, a collaboration which did not work out), I was reading exclusively contemporary authors—all the logical positivist literature, the analytic philosophy literature, all of Chomsky that had been written to that date—in a word, the then-contemporary literature on language. And what I found in the central authors of the modern logico-linguistic developments—I may mention notably Frege, Wittgenstein, Russell, Carnap, Ayer, and even Brentano with regard to the use of intentionality as a tool of debate—was that they were mainly intent on finding a way to assert a one-to-one correspondence between language and mind-independent reality and to say that the only time that language is really working is when it conveys that correspondence. In fact, however, much of what we talk about and think about in everyday experience is irreducible to some kind of a prejacent physical reality in that sense. There is no atomic structure to the world such that words can be made to correspond to it point-by-point. Nor is there any structure at all to which words correspond point-by-point except the structure of discourse itself, which is hardly fixed, and which needs no such prejacent structure in order to be what it is and to signify as it does.

It is wonderful to look at the history of science and culture generally from this point of view, which is, moreover, essential for a true anthropology. The celestial spheres believed to be real for some two thousand years occupied huge treatises written to explain their functioning within the physical environment. Other examples include more simple and short-lived creatures that populate the development of the strictest science, such as phlogiston, the ether, the planet Vulcan; and examples can be multiplied from every sphere. The complete history of human discourse, including the hard sciences, is woven around unrealities that functioned once as real in the thinking and theorizing and experience of some peoples. The planet Vulcan (my own favorite example alongside the canals of Mars) thus briefly but embarrassingly turned up as interior to the orbit of Mercury in some astronomy work at the turn of the last century. But Vulcan then proved not to exist outside those reports at all. The objective notion of ether played a long and distinguished role in post-Newtonian physical science—as central in its own way as the celestial spheres were in the Ptolemaic phase of astronomy’s development—before proving similarly to be a chimera.

So the problem of how we talk about nonexistent things, where nonexistent means nonexistent in the physical sense, is a fundamental positive problem with which the whole movement of so-called linguistic philosophy fails to come to terms. This is not just a matter of confusion, nor just a matter of language gone on holiday, but of the essence, as we will see, of human language.

To understand this fundamental insouciance of language, whereby it imports literary elements of nonbeing and fictional characters even into the sternest science and most realistic concerns of philosophy, we will find it necessary to reinterpret language from the semiotic point of view.

John Deely 2015: Basics of Semiotics, 8th edition, 19-20 (all emphasis added).

Commentary

While there are many points worthy of expansion in this brief text, I wish to highlight only three: namely, the three points in bold.

Signifying Non-Existent Objects

First allow me to pick up the last, namely, that “how we talk about nonexistent things, where nonexistent means nonexistent in the physical sense, is a fundamental positive problem with which the whole movement of so-called linguistic philosophy fails to come to terms.” It is a failure, indeed, in a presupposed principle—what we might term the positive formulation of nominalism—namely, that only individuals exist independently of the mind. This nominalist presupposition condemns any believer in it to incoherence. As Deely here hints, language and indeed all communication require a reality of the relation in order to function. If only individuals exist, relations must either be fictions of the mind or themselves individuals. But if relations are individuals, they would be individuals unlike all others—to the point that we would be predicating the term, “individual” equivocally.

Nominalism will prove a ripe topic for another day, however. Instead, let us simply say that its presupposition leaves one unable to draw meaningful connections between existent and nonexistent objects. If one’s theory of language struggles to account for the latter—except to posit them as meaningless—one will be forced, ultimately, to evict all meaning from language, for that theory has failed to recognize the essence of language itself.

The Structure of Discourse

Second, let us consider Deely’s statement that “Nor is there any structure at all to which words correspond point-by-point except the structure of discourse itself, which is hardly fixed, and which needs no such prejacent structure in order to be what it is and to signify as it does.”  Within this, I wish to focus on the except clause—that is, the structure of discourse.  What is this structure?  We might alternatively name it the structure of thought’s expression.  Consider a common problem: finding the right words to express yourself.  We all experience this from time to time.  We fumble in vagueness for not only the right semantic signifiers, but even the right structure in which to array them.  Perhaps we even feel pressed into creative linguistic expression: coming up with new words or structures in the effort to convey our meaning.

This occasional creative necessity exhibits the lack of fixity characteristic of the structure of discourse. To complete the conception of any given idea, we must bring it forward into expression. If we cannot express it, it remains incomplete. While every concept may be derivative of prior experience and thinking, this dependence does not preclude the new idea. Were that the case, we would have no inventions, no fictional stories. And this brings us to…

The Power of Language

Third, what Deely calls language’s “power to convey the nonexistent with a facility every bit equal to its power to convey thought about what is existent.”  This equal power of conveyance bears enormous importance for understanding the psychology of the human person.  That we constitute in linguistic objectivity both ‘what is’ and ‘what is not’ alone explains the constitution of all culture.  Moreover, it explains how that cultural being can grow up at odds with human nature.  It can also explain why some hold the profane as sacred, and why the distinction between fact and opinion (as well as value) is not so absolute as often presupposed.

Fully explaining this power of language takes much more background and exposition than can be provided here. Suffice it only to say that, if we are to understand the functioning of language, we must do so from a perspective which grasps the true breadth comprised within the structure of discourse.

As a final way of articulating this importance: the semiotic point of view illuminates the development of linguistically-signified meaning from out of the indeterminacy of pre-linguistic experience.

Tradition and Technology

Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness.  Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history.  Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious.  Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible.  As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separated, closed systems was socially and psychically supportable.  This is not true now when sight and sound and movement are simultaneous and global in extent.  A ratio of interplay among these extensions of our human functions is now as necessary collectively as it has always been for our private and personal rationality in terms of our private senses or “wits,” as they were once called.

Marshall McLuhan 1962: The Gutenberg Galaxy, 6.

To say that we live in unusual times would be an understatement.  Certainly, every age has its own unprecedented happenings, many of which are precipitated by technological advances.  But in the present iteration of the electronic age—that which we can fairly call “digital”—it appears that technological advance and use no longer occur as separate from the lives of human beings.  In truth, they have never occurred with such separation.  But today, technologies’ essential function, namely the extension of our natural faculties, has in a way exceeded the proportions set by nature.  This technological disproportion presents an unprecedented challenge.

The rapid unfolding of this unprecedented disproportion, as apprehended (but not understood) in the Western-cultural world, has led to many abandoning their intellectual traditions.  This abandonment comes with hope or desperation for new solutions to the problems (such as endemic tendencies towards psychosis, generative intelligence simulators [mistakenly named “artificial intelligence”], the rapid fragmentation of opposed political ideologies, and global economic precarity with instantaneous consequences) which now threaten our civilizations.  But this abandonment itself misperceives the persistent root underlying these newly-emergent problems—a root which is not a problem itself, but a difficulty with which we as human beings must struggle: namely, understanding human nature.

For this understanding, we are fools not to turn with repeated humility to the great works of our tradition.

What is Tradition?

This question—“what is tradition?”—proves surprisingly difficult to answer beyond providing the most basic definitions and descriptions.  But as a fundament for any good response to the question, it must be stated that tradition universally consists in the “handing down” of beliefs and behaviors to others.  Tradition proves therefore both something communicated and something essential to communication.  Every word read off a page or spoken aloud presupposes a common linguistic tradition, not only of the particular letters, shapes, or sounds by which the meanings are conveyed, but of those meanings as well.

Thus, we build traditions not only by the things we use, but by the thoughts with which we inform those things.  Put in other words, tradition always finds itself infused with symbols: conventionally-appointed signs which convey universal ideas.

Traditional Signs and the “Idea” of a Tradition

Behind this identification of symbols and tradition lies a deep inquiry into semiotics (the study of the action of signs).  We do not need to make this inquiry, however, to observe the truth that tradition and symbols are intimately related.  All we need is a little reflection.

Think, for instance, of long-enduring religious practices.  One might think of the Catholic Mass, whether in Roman or Orthodox rites.  Here, symbols abound—not only in the appointments of a church building (stained-glass windows, statues, altars, tabernacles, and so on), not only in the vestments of priests and servers (cassock, alb, amice, cincture, stole, chasuble), nor even in their particular adornments and imagery, but also in countless actions and words (every noun, verb, adjective, adverb, conjunction, and preposition including some symbolic signification).  Conveyed thereby are not only millennia of gradually-accumulated practices, but a thinking-through of how we ought to behave with regard to the sacred.

Or, as a very common form of traditional symbolism across varied cultures, consider the practice of vestment: a tradition found not only in Catholicism but in, for instance, Zen Buddhism.  Japanese practitioners are, upon entry into their monastic life, garbed with the kesa, the reception of which symbolizes receipt of the Buddha’s teaching and which is worn throughout daily rituals as a reminder of one’s commitment.  The meaning of the garment is much more than the garment itself, just as a priest’s opening antiphon—Introibo ad altare Dei, “I will go in to the altar of God”—signifies much more than an intent to ascend the stairs of the sanctuary.

We might further think of more common cultural practices that also have a clear symbolic meaning: such as putting up decorations for holidays (whether retaining spiritual depth or not), giving gifts on birthdays, eating a large and plentiful meal on Thanksgiving (in the United States, at least), even the act of shaking hands with someone—each means more than the act itself.  We decorate for a holiday not only because we like to make our homes more attractive for a time (do we want our homes unattractive the rest of the year?); nor do we decorate however we please, but in a way that is in keeping with the holiday celebrated.  Putting up pumpkins at Christmas would be quite bizarre, regardless of one’s religious beliefs.  So too, a Christmas tree does not belong at a Fourth of July party.  We give gifts on birthdays not because we are rewarding the person celebrated, but because we wish to convey our joy at his or her life, to commemorate another year of being-together and hopes for the year to come.  Consuming a large meal at Thanksgiving does not celebrate gluttony (even if often it may turn out that way), but expresses gratitude for life itself, with food that not only nourishes but delights.  Shaking hands not only greets the other, but expresses an intention towards that other (and principally, we intend to signify a spirit of cooperation—though an aggressive handshake might signify otherwise).

If we think a little more, we will realize that we participate in traditions through their symbols on an almost daily basis: in prayer, in conversation, in reading, in writing, in almost any interaction with any other human being, we will engage in some symbolic signification of something above and beyond the here and now moment.

Intellectual Traditions

Of particular importance for the Lyceum Institute are intellectual traditions.  An intellectual tradition comprises symbolically-conveyed relations of beliefs which have been handed down from earlier thinkers.  To give an example, we can take a word we just used: namely, “belief”.  What do we mean by this word?  To some, it may signify faith or religious / personal conviction.  Here, however, it is being used in the tradition of Charles Sanders Peirce, who defines it (to paraphrase) as “conviction in the truth of a proposition so as to act in accordance with it when the occasion arises”.  I believe that spilling water on myself is a nuisance, and so I act in a manner that attempts to prevent spilling water on myself.  I believe that truth is a good to be shared, and so, when the opportunities present themselves, I attempt to share the truth.  I believe that C.S. Peirce has insightful things to say, so I try to read his works.  Each of these beliefs shapes my action, because my conviction is not only that they are true, but that the truths they convey are good.

The purpose of an intellectual tradition is to hand on the truths which produce convictions that turn into beliefs.  We uphold an intellectual tradition because we find that it reveals the intelligible truths of being and, in the beliefs it fosters, we are motivated to actions that are good.  These intellectual traditions can be scientific, theological, literary, historical, artistic, religious, and anything in-between.  The Shakespearean sonnet, for instance—a specific metrical poetic form—belongs to an intellectual tradition inasmuch as this form itself, not independently of but irreducible to the content, signifies something beyond itself.  Likewise, the practice of modern scientific methodology forms part of an intellectual tradition, inasmuch as it is believed to discover and indicate explanations for observed phenomena.  So too, the religious practices of churches and temples alike all are informed not only by a tradition of practice but also of intellectual understanding and likely of some theological belief—however well or poorly formed that understanding may be.

But that literary, scientific, or theological traditions are formed well—this requires a kind of synoptic, holistic, and fundamental perspective: a perspective which can be formed only through philosophy.

Philosophical Traditions at the Lyceum

Just as with the above disciplines, philosophy, too, both produces and develops within intellectual traditions.  Unlike those mentioned above, however—although a certain exception must be made for theology—philosophy encompasses the whole of human experience, including that which is pursued in all other intellectual pursuits.  Nothing falls outside of its domain.[2]

In light of this truth, perhaps no intellectual traditions have as fundamental an importance for our earthly lives as those of philosophy, for it is within and through philosophy that our beliefs about diverse matters can be resolved into a unity.  As these philosophical resolutions gradually grow—one truth illuminating another, another dissolving a false opinion, and yet another coming from the connections drawn between the truth and falsity, and so on—they form a tradition.  Put otherwise, a philosopher establishes some premise as a principle.  From this premise, further conclusions are drawn.  These relations of premises and conclusions are taught to others, students.  These others discover yet further meanings in light of the earlier thinking.  Often, the teachers and students alike write down their thinking.  Thus, the tradition grows not only from mouth to ear, but from pages through eyes.

As more is written—and as traditions come into conflict with one another—their reception becomes increasingly complex.  If we do not read Plato himself, but only what is said about him by others, we do not truly know Plato’s thought, even if those others are accurate.  Conversely, if we read only Plato himself, we inevitably will miss certain truths about his thinking that others have perceived and explained.  Doubtless we will discover, with little enough reading, that commentaries upon Plato often conflict with one another.  It belongs to a student of Platonic thinking, then, not merely to receive the tradition’s conclusions but, much more poignantly, to re-think its questions.

Such is the approach to philosophical traditions taken at the Lyceum Institute.  We give certain traditions—Platonism, Aristotelianism, Thomism and Scholasticism generally, Peircean semiotics, certain thinkers within phenomenology and hermeneutics—greater emphasis than others because the questions they ask, and the answers provided to them, have proven better explanations than those given by those others.  Because philosophy asks perennial questions, and because its answers are not like the solutions to simple mathematical equations, one cannot simply appropriate a tradition; one must live within it.  Doing so develops a philosophical habit, and it is through this habit that we are able to face new challenges and difficulties.

What is Technology?

Among the emphasized philosophical traditions mentioned above, one will commonly find a tendency towards what we can call realism: that is, simply put, the belief that our knowledge, at least in part, really is of things as they are in themselves.  Many other philosophical traditions are not realist, in one or another important way.  Whether one is a realist or an anti-realist will change how one understands technology: for the latter, since human nature itself remains essentially unintelligible, technology can only be a construct of our own making.  For the former, the realist, technology can instead be understood as an extension of human faculties.

Asking the Right Questions

This notion—that technology extends our faculties—requires, of course, that we understand what those faculties are and how they function.  The author quoted at the outset of this article, Marshall McLuhan, dedicated much of his career to discovering the relations between diverse kinds and instruments of technology and the human sense faculties.  His seminal 1964 book, Understanding Media: The Extensions of Man, considers (among many others) as such technologies: spoken and written words, clothing, money, printing, photographs, automobiles, games, movies, radio, television, and automation.  As the use of each technology increases, McLuhan argues, the ratio of our senses is changed.  Some pull us into more of a visual modality; others, auditory; others still, the tactile.

Through the alteration of these ratios, we alter also the environments of our human and especially social living.  We human beings have been shifting the ratio of our senses since before recorded history.  Indeed, such shifts were required to invent the means of recording: the development of languages and the means of their preservation as an extension of memory.  But while we can conceive easily enough certain superficial extensions of our faculties—writing an extension of memory, photographs an extension of sight (allowing us to see things from the past)—the complex interplay of these technological extensions often eludes our awareness. 

Allow me to suggest that this elusiveness belongs not to the technological devices or products, but rather to the fact that technologies never exist independently of human beings.  That is, technologies come into being through human invention, yes; but more importantly, they operate as technologies only in relation to some human purpose.  Automate a technology to continue past all human existence, and it may continue to function.  But will it continue to function as a technology?

In other words: what makes a technology to be a technology?  What do we really mean when we say the word, “technology”?  Of course this question is not new.  But do we have (have we ever had?) the right intellectual traditions to answer it?  Can we incorporate such answers into a philosophical tradition?

Technology and Human Environments

As we have already mentioned, technological innovation and use alters—we may even say, to some degree, constitutes—the environments of human experience.  Connections between the advent of automobiles and the development of suburbs, for instance, are well known.  To many, this exodus from the urban life of city and town has a double concern: first, the evacuation of the cities themselves.  As McLuhan writes in Understanding Media, “There is a growing uneasiness about the degree to which cars have become the real population of our cities, with a resulting loss of human scale, both in power and in distance.”[3]  As anyone who has lived in or near a city or sizable town can likely attest, the car presents a struggle: it requires much real estate for driving and for parking—increasing both horizontally and vertically the expanse of concrete, pushing out shops, stores, restaurants; destroying neighborhoods and communities alike.

But secondly, the removal of persons to suburban environments has an effect on our psychology as well.  It makes us much more private.  Our suburban neighbors are always there—but merely there.  We may work in entirely different directions; we may commute long distances; we may shop at different stores; our kids may go to different schools; we may have naught in common as to the conduct of daily life but for the fact that we live on the same street.  The destruction of human scale inhibits our formation of communities; and the lack of communities affects the mind of each human being who needs such community in order to thrive.

The impact of technological developments upon our lives, in other words, consists not only in a physical reshaping of the environment, but also in a psychological restructuring, which always plays a role in environmental constitution.  How we look at things—how we think about them—has a way of changing how they fit into our lives.  But if technology changes how we look at and think about things, how we hear what they have to say, then clearly it is also important that we perceive and think correctly about technology, as well.

Digital Paradigm of Technology

In the past several decades, a new paradigm of technology has become increasingly prevalent in this, the electric age: namely, the digital.  Few have sufficiently considered the weight of this shift.  While the first several decades of electricity saw communication transferred primarily through analog means—where one physical medium is used to represent another, but with physical limitations that constrain the suitability of instruments (e.g., you could not produce a photograph on a vinyl record, or record sound in a Polaroid)—the increasing translation of records into digital formats has radically altered the human environment of today.

Where previous technological innovations altered the ratios of our faculties, the digital has altered them beyond all proportionality.  It homogenizes all data: images, sounds, representations of tactility, relations and patterns of relationships.  It captures with deceitfully-perfect seeming-fidelity not only the real, but so too the fake; fact and fiction become, in a digital paradigm of preservation and re-presentation, increasingly indistinguishable.  We may witness this in the big-budget cinematic film, in which it has become increasingly difficult to distinguish what belongs really to things themselves from what was created digitally or by some other means.[4]  As we blur lines between reality and fantasy, we damage the faculties of perceptual distinction upon which we intellectually rely.  Reciprocally, the less we strive to develop our intellectual habits, the more damaging these sensory distortions become.

As the digital permeates our environments ever deeper—integrating into our homes, our devices, our communications, ever-present through one or another screen, always ready-to-hand through the phones in our pockets—we urgently need to ask: how do we understand these technologies?  How do they fit coherently into human life?

Philosophical Tradition and Technology

Many propose responding to this question with the Luddite answer: eliminate the technology, either in itself or from your life; disconnect your homes and your devices, sign off of your accounts, live in a technologically-minimalist way.  Doubtless, this answer appeals to many.  Modern life causes no shortage of exhaustion and a retreat from its technological instruments promises a desirable rest.  But though this may prove a solution to the problem of one’s own individual living, it does nothing to resolve the essential and essentially-human difficulty of technology.  Fleeing from technology will not give us understanding of it, and thus—sooner or later, in our lives or those of generations yet to come—technology will grow again.

Instead, to handle the difficulty, we need philosophy.  But what we need is more than merely a set of philosophical doctrines.  We need a philosophical tradition that instills in us a habit of careful thinking about the phenomena that not only surround us, but that shape and constitute the environment in which we live.

Thoughtful Engagement of the Digital Paradigm

Most especially do we need this habit of thoughtful reflection within the digital paradigm.  As mentioned above, the weaker our habits of thinking, the more damaging we may find this most-pervasive of technological developments.  Most readers of this essay, it is expected, are wary of social media: its effects on the psychological well-being of youth have become a hot topic no less than the proliferation of “fake news”.  But it influences us in more subtle ways, as well, providing us not only with unhealthy self-images or untrue claims, but also changing our very patterns of thinking.

It may seem silly, perhaps, to look for answers about how we ought to live in the digital age from thinkers who died centuries ago, thinkers who never experienced technologies or modes of life quite like our own—thinkers like Aristotle and Aquinas.  But although the particulars of our own day differ from those these earlier thinkers knew, their insights into the universal truths of the human being remain ever-pertinent, and, if we can engage these traditions thoughtfully, we will find ways to bring their insights into an illuminating dialogue with the unique particulars of our own day.

The first task for a philosophical tradition appointed to initiating such a dialogue is the articulation of technology’s essence.  The Lyceum Institute is taking up this task in a year-long project, Humanitas Technica, which will run throughout 2024, including an extended and expansive seminar to take place in the Fall.  We have already begun preliminary conversations—covering how our relationship with technology has gone wrong and the conception of technology held by those responsible for creating it—and have begun a preliminary consideration of technology’s definition.  These preliminary conversations serve to illuminate the questions still to be asked.  But primarily, they have shown that most of our technologies—even those that seemingly concern naught but the change of physical entities—modify our relations of communication.

Questioning Presuppositions

This centrality of communication to all technologies brings to light two common presuppositions: first, that technologies are principally instruments; and second, that they are inherently neutral in themselves and only incidentally used for good or evil.  Recasting the conversation about technology—such that we consider technological instruments not principally through what they do according to their own forms but what we do through them in our most human capacities, and how this activity reverberates into us—challenges both these presuppositions.

To unfold this claim for recasting the conversation requires both the strength of a philosophical anthropology, such as that found in the Aristotelian-Thomistic tradition, and the robust understanding of signification found both in late developments of Latin Thomistic thinkers and the semiotics of Charles Sanders Peirce.  These latter considerations—concerning signs, symbols, and their interpretation—will allow us not only to apply our traditions to the technological developments of recent decades or those to come in the future, but to fold technology itself into these traditions.  What is a smartphone?  A piece of protective equipment?  A new medicine?  To understand such innovations, we cannot ask only what they do, but how we understand them.  For this, semiotics will prove essential.

Many other contributions across diverse philosophical traditions of realism, no doubt, will find their place in these conversations as well.  But regardless of the insights’ sources, it is a conversation which needs to be had and one which indeed requires recasting.  The growing cultural problems—worry over which fills our publications, our daily discussions, the anxieties which gnaw at the souls of parents and teachers alike—all appear exacerbated by the technological environment we now inhabit.

Traditions of Philosophical Realism and Technology

Much more remains to be said on this topic, and I have little doubt that the many fine people involved in Humanitas Technica will have much worthwhile to say, but allow one final point here.

If we cannot know things as they are independently of our minds—know them not only according to their sense-perceptual, empirically-observable, mathematically-calculable attributes but according to their intelligible and universal essences—then technology may have all the effects mentioned above, but we cannot know or discern their causality.  From any anti-realist perspective, our concern with technology cannot but become one that is merely instrumental, and, ultimately, which aims at using technology for dominance.

By contrast, the traditions of realist philosophy, which possess the capacity not only to unveil technology’s essence but also to discover its possible coherences and incoherencies with human nature, enable us to use technology well.  This good use can come only through inculcating the philosophical habit.  Such a habit, which enables us not only to handle the digital paradigm but to navigate future difficulties as well, comes not through a set curriculum of courses, nor through receiving the right information, nor even through studying the right figures, but from continuing to question.


[2] So too, theology: but the subject matter of theology, properly speaking—at least according to the Catholic intellectual tradition—is provided by divine revelation; and thus, to enter into its study properly, one must possess a certain faith, for all things as they fall under the umbrella of theological study resolve not to human experience, but to the divine eschaton.

[3] 1964: Understanding Media, 293.

[4] In truth, the sound of films has almost always been created by something other than what is represented through the screen—the industry having relied for long upon what is termed Foley art to make sounds more convincing than those that can be captured by on-set microphones.

Hervaeus Natalis and Logic

Ho ho ho… Harvey is coming to town?

One of the many fascinating contributions semiotics makes to contemporary philosophical discourse is the role it sees for signs and sign-relations in the domain of logic. In this interview on Dogs with Torches, we are joined by the Lyceum’s very own Dr. Matthew K. Minerd to discuss the scholastic development of logic in the 13th and 14th centuries, as well as the thought of Hervaeus Natalis on the domains of logic as the study and science of second intentions.

Towards the end of the episode, we also discuss the reflections Natalis has for the domain of ens rationis in general, and the possible implications it has for the scope of metaphysical enquiry. We also touch briefly upon other philosophical issues such as: species-specific extrinsic denominations, moral being, rhetoric, zoösemiotics and phytosemiotics, and the being of intentionality.

In addition to the interview, Dr. Minerd also graciously recommended further resources for those who want to investigate further the medieval developments in logic, as well as the development of the scholastic understanding of ens rationis in general.

On Modernity, Ultramodernity, and Postmodernity

If you and I are to have a conversation—that is, a kind of living-together through discourse or the concrete articulations of language—we must, indeed, do so in the same “place”.  Put otherwise, we cannot have a conversation unless the objects signified by our words are the same.  I do not mean that each and every word used by each person in the conversation needs to have the exact same denotations and connotations as understood by every other person in the conversation.  But without some common referents—some formal identities between us as to what the words we use mean—we are not and cannot be engaged in a dialogue, but only exchanging equally incomprehensible monologues.

It is to the end of establishing some of these necessary commonalities, particularly concerning the meaning of modernity and thus the elaborated forms of ultramodernity and postmodernity, that this article has been written.

The Common Understanding of Postmodernity

Let us begin by asking: what is postmodernism?  Commonly, the term is used to indicate a movement committed to the radical abdication of belief in ultimately-defensible intelligible meaning.  James Lindsay, for instance—who attained some fame through his work with Peter Boghossian and Helen Pluckrose in exposing the absurdity in a lot of academia—has frequently referred to an “applied postmodernism” identified with social justice activism.  By this phrase is meant: the truth about things is less important than the imposition of moral categories based on emotional responses, many of which have been fostered through radical historicism or selective criticism of the structures common to Western civilization.  James Croft, University Chaplain and Lead Faith Advisor to the University of Sussex—with an EdD in Education and Human Development, who describes himself as a gay rights and Humanist activist—describes postmodernism as comprising positions “anti-foundationalist”, “anti-essentialist”, “anti-teleological”, and “anti-universal”.

Academic Understandings

But is this throwing-around of terms in the public sphere, without careful attention, truly indicative of what postmodernism is or what it is understood to be among its adherents, advocates, and expositors?  In a sense: yes.  The Stanford Encyclopedia of Philosophy begins its entry on “postmodernism” by writing:

That postmodernism is indefinable is a truism.  However, it can be described as a set of critical, strategic and rhetorical practices employing concepts such as difference, repetition, the trace, the simulacrum, and hyperreality to destabilize other concepts such as presence, identity, historical progress, epistemic certainty, and the univocity of meaning.

Further, the first prominent instance of the word in academic discourse seemingly belongs to Jean-François Lyotard’s 1979 book, La condition postmoderne, published in 1984 under the English title, The Postmodern Condition: A Report on Knowledge.  In Lyotard’s “simplification to the extreme”, he defines “postmodern as incredulity toward metanarratives” (xxiv).  As he states, the incredulous turn began around the turn of the 20th century, as the “rules” for science, literature, and the arts were changed through a seemingly radical re-thinking.  One thinks of André Breton’s two manifestos on Surrealism, transgressive films like Federico Fellini’s La Dolce Vita or some of Hitchcock’s work, compositions of Arvo Pärt, György Ligeti, Philip Glass, or Brian Eno, the novels of James Joyce or William Gaddis or Thomas Pynchon or Michel Houellebecq—or any of the expressive turns which attempt to convey a meaning, or the possibilities of interpretation, through methods which defy the previously-established norms and conventions of society.

Nascent and oft-unrecognized turns towards this incredulity of narratives in science can be found as early as the mid-19th century, particularly in the establishment of psychology as a discipline independent from philosophy and the shift to an apparently post-Aristotelian logic.  Though ostensibly grounded in the modern traditions of science—with their focus upon the experimental and the quantitative—these “new” approaches further untethered the objects of thinking from the mind-independent real.  The development of these and other such sciences led to a further fragmentation of intellectual advance and the present-day irreconcilability of the disciplines[1] as well as the widely-known “replication crisis”—not to mention opening the door for events such as the Sokal Hoax or the “grievance studies affair”.  Some might take these latter as evidence that the social sciences are insufficiently scientific.  Their simultaneity with the replication crisis shows, however, a deeper problem about the condition of knowledge—precisely the point articulated by Lyotard in his 1979 work.

The Public Face of the “Postmodern”

The fragmentation of knowledge and dissolution of narrative foundations have found their way also into the practical, moral, and political dimensions of life.  Without widespread social acceptance of either common principles for knowledge or shared sentiments—say, those stemming from religious belief, patriotism, or adherence to the dictates of an authority—new struggles appear concerning law and power.  Thus arise new movements of “social justice activism” or “applied postmodernism”, frequently witnessed on or stemming from college campuses throughout the late 20th century and into the early 21st—which follows insofar as these movements spring from a “theoretical postmodernism”.  In recent decades, the application of postmodern thinking (though often not going explicitly under this name) has reached a somewhat more national consciousness, infiltrating politics, with not only the once-fringe Bernie Sanders shuffling into the limelight, but also the influx of young, female politicians with contrarian ideas, such as Ilhan Omar, Rashida Tlaib, and Alexandria Ocasio-Cortez.  The lattermost, for instance, gave an interview to Anderson Cooper on the television program 60 Minutes where she stated that “there’s a lot of people more concerned about being precisely, factually, and semantically correct than about being morally right.”  Though she walked it back, that sort of statement is not simply a mistake, a slip of the tongue, but rather exhibits a structure of thinking.

In other words, there is a strong opposition within the contemporary movement to what historically has been called “reason”.  At the theoretical level, this opposition often utilizes the very language of power-struggle arising in the wake of reason’s dismissal to vilify “reason” as an instrument of oppression—a tool employed by the hegemonic forces that are responsible for seemingly all human pain or suffering.  To the contrary, emotions and lived experience are promoted as the new and perhaps the only means to a just world.

Theoretical Roots of the “Postmodern”

At any rate, the theoretical or philosophical roots will often be linked to names like—in no particular order—Karl Marx and Friedrich Engels, Jacques Lacan, Max Horkheimer, Jean-Paul Sartre, Albert Camus, Jacques Derrida, Jean Baudrillard, Gilles Deleuze and Félix Guattari, Herbert Marcuse, Theodor Adorno, Louis Althusser, Michel Foucault, perhaps Slavoj Žižek, and the ethereal movement which may not be precisely identified with Karl Marx’s thought, but which not-illegitimately claims inspiration from him—the movement which receives, even if it repudiates, the name of cultural Marxism, exemplified in the recent “woke” movement represented by public figures such as Ibram X. Kendi; or in the closely-related field of “semantic activism”, that is, the effort to shift the meanings of words to produce desired cultural outcomes, advocated by academics such as Kate Manne.  Broadly speaking, the so-called theoretical postmodern is enmeshed with relativism, a “post-truth” mentality, and the radical resolution of the meaning of any apparent phenomena to naught but the individuated psyche, which rather ironically leaves power—understood not as the necessity of force but the persuasion to willing conformity—as the only means to social change.

Or, to sum the core belief up in a single sentence: “At the heart of liberty is the right to define one’s own concept of existence, of meaning, of the universe, and of the mystery of human life”.

The words are not mine, of course, but those of Supreme Court Justice Anthony Kennedy, written in the decision of a 1992 case, Planned Parenthood of Southeastern Pennsylvania v. Casey, which “reaffirmed the essential holding” of Roe v. Wade.

Now, regardless of your stance on abortion—an issue I believe factually straightforward but psychologically complex, and which I will therefore not venture into here—this assertion of the Supreme Court should, if you believe that reason has a central role to play in human life and the pursuit of human good, deeply and profoundly disturb you.  It should also raise questions about the concept of liberty: a concept central not simply to so-called postmodernism, but perhaps even more prominently to the modernism which preceded it.

The Meanings of Modernity and Modernism

Understanding “modernism”, however, is not simple.  For one, the term may be applied in two different but related senses: first, referring broadly to a cultural transition from centralized monarchical political authorities to individualistic democratic or republican governments, along with a secularization of education in both means and content; and second, referring narrowly to the philosophical movements which prompted much of this cultural change.  The cultural transition was exemplified intellectually by Galileo, Newton, and Darwin—the rise of a secular science outside the Catholic university—not to mention the Enlightenment generally, and politically by the American and French revolutions and the ensuing diminishment of monarchy across the Western world.  Less lauded or even recognized by the proponents of modernity (except as a sign of modernity’s achievements, rather than a cause), but just as necessary to its achievements and philosophy, was the rise of technology, especially industrial technology.

This is not to say that every fruit sprung from the tree of cultural modernism is poison.  For instance, through the shift away from authority—though a shift taken entirely too far—there emerged a better image of the means to individual flourishing as intellect-possessing animals (or semiotic animals).  Yet this fruit is the exception rather than the norm.  By contrast, the loss of the true good of authority—namely, the connection of an individual with a normative truth higher than any individual—and the false belief in authority as a kind of power necessary for enforcing a social contract is poisonous indeed; as are the slide into scientism, the fragmentation of knowledge, and the rejection of tradition on the mere basis of its being traditional.

Philosophical Modernity

Most important for addressing modernism, however, is to understand its philosophical roots.  Here, we can quickly get at the essence: for modern philosophy has two founders who stand above the rest, namely René Descartes (1596–1650) and John Locke (1632–1704).[2]  Descartes is best known as the first of the modern rationalists, holding that ideas are not derived from the objects of sense experience: sense experience at most gives us the occasion to form or discover an idea, while other ideas are given directly by God or are instilled in us innately (which might as well be the same thing).  Contrariwise, John Locke held the mind to be a blank slate and thought all our ideas were formed from the empirical experience of objects, built up in an almost atomistic fashion, such that, having experience of one sensation after another, we come to form generalized notions of the things experienced.  The decades and centuries following Descartes saw theorists of both rationalist and empiricist persuasions—Gottfried Wilhelm Leibniz, George Berkeley, Baruch Spinoza, David Hume, and Immanuel Kant, among others too numerous to name—arguing back and forth over the starting points and nature of knowledge.  All of them seemed entirely unaware that both sides partook of a fundamental and egregiously mistaken presupposition: namely, the belief that our ideas are themselves the direct terminal objects of our cognitive actions; in other words, the belief that we know our ideas and, from knowing our ideas, know the world.  Though the rationalists have often received the name of “idealist”, in truth the empiricists are just as fundamentally idealist as their opposition.

This presupposition, regardless of one’s theory of ideation—a presupposition we may call, following Leibniz and John Deely, the “Way of Ideas”—drives an incorporeal and imaginary wedge between the individual human being and everything else.  The more attempts are made to build a bridge over this gap, the deeper the wedge is driven.  For the wedge of idealism, once admitted into a theory of knowledge, sets the individual and his or her experience, understood as the knowing of his or her ideas, over and against the world as something extended and material, not known directly but only through the mediation of one’s subjectively-constrained ideas.  Inevitably, therefore, it drives individuals deeper into individualism, as they believe themselves to dwell within their own minds.  Thus the Way of Ideas ends up driving the wedge so deeply that it widens the initial gap into a vast chasm: a chasm between the world as known and the self as knower, between the physical and the cultural, and between the natural and the personal.

For the turn first introduced by Descartes is a turn inwards: a turn which makes thought essentially private.  (There is a lot to be said here about the technology of the printing press and the paradigmatic shift away from the scholastic modes of philosophy, obsolesced by privately owned and privately read books—a lot that I am not going to say here, except that the Cartesian subjective turn gives an intelligible articulation to a spreading psychological habit, technologically exacerbated, making that habit both explicit and reinforced.)  The result is a diminished belief in intellectual conception as an essentially public activity.  Instead, truth is seen as something realized privately and subsequently spread societally through convention and agreement.  The promise upon which this extension of private thinking into social convention depended was the supposed universality of the scientific method—and perhaps of a philosophy structured in the same manner.  Such was the proposal of Immanuel Kant in his Critique of Pure Reason.  Such was the spirit of the Enlightenment in general: Diderot’s encyclopedia, Voltaire’s histories and letters, Rousseau’s theories of social contract, and so on.  Everywhere, one saw attempts to guarantee a better future through the blending of empirical observation with a faith in scientific method to regulate those observations into a universal monolith of “objective” truth.

The result of these efforts, however, is not only a habit of treating thought as private, but also a habit of denying the reality of our own experiences: for every experience we ever have, of anything whatsoever, in any discernible regard, always exceeds what is grasped by mere empiricism (understood as the discrete reception of sensory phenomena).  Do our experiences and our knowledge begin in sensation?  Absolutely and undoubtedly.  But does the sensory data, or even the neurological activity, explain either the having of experience or the attainment of knowledge?  No; it does not even come close.

“Postmodernity” is Ultramodern

And this philosophical error is why modernism leads inevitably towards so-called postmodernism: not because modernism ebbs away, but because its own internal principles, carried towards their logical conclusions, lead inescapably to nonsensical, non-rational positions—to the very repudiation of reason itself.  Superficially this appears most ironic, and it will be rejected by all adherents of modernism.  For modernism hails “reason”; but the reason it hails is one stripped of its vigor, for it is not a reason which discovers the truth concerning the fullness of reality outside the mind or independent of the psychological self.  Modernity’s “reason” supplants the search for a cognition-independent truth with an amalgamation of facts, like so many grains of sand, out of which it tries to build the truth; and now the remnants of ideological modernity wail when the so-called postmoderns—who are, in truth, really ultramoderns—come to knock down this granular edifice and re-shape it as they see fit.

Allow me here a lengthy quote from an article of John Deely:[3]

Relativism and solipsism are not matters that follow upon or come after modernity: in philosophy they have proved to be its very heart and essence, present from the start, a looming presence which took some centuries fully to unveil itself.  Late modern philosophy, phenomenological no less than analytical, finally embraced fully what Descartes and Kant had shown from the start: the human being is cut off from nature, hardly a part of it; the human being becomes a cosmos unto itself, with no way to relate beyond itself, beyond the veil of phenomena forever hiding the other-than-human things which are other than our representations, whimsical or a-priori as the case may be.

Modern philosophy fathered and fostered the pretense that science must confront nature as an “objective observer”, or not at all.  But modern science found that not to be the situation at all.  Instead of confronting nature as an outside observer, science came to see itself rather in Heisenberg’s terms as an actor in an interplay between the human world within nature and the larger world of nature of which the human world forms a part.  It found itself to be “focused on the network of relationships between man and nature, and which we as human beings have simultaneously made the object of our thought and actions” (Heisenberg 1955: 9).

From the point of view of the sciences as co-heirs of modernity with philosophy, this paradigm shift seemed a kind of revolution, a veritable new beginning.  But from the point of view of semiotics this shift is something more than merely a new beginning: this shift is a going beyond the modern heritage.  In effect, the late modern philosophers clinging to their notion of the human world of culture as a whole unto itself, cut off from nature as if autonomous in its unfoldings, are anything but “postmodern”, notwithstanding the many who have tried so to style the embrasure of the relativism implicated in the epistemological view beginning and ending with the subject thinking.  If anything, the muddle of thinkers whose usage of “postmodern” Stjernfelt would like discussed contribute nothing that goes beyond modernity, but only reveal and revel in what modern epistemology logically leads to, what modern epistemology had entailed all along.  Ultramodern rather than postmodern, they are not the beginning of a revolution against modernity but the death throes of the revolution in philosophy that was modernity, Mr. Hyde to the Dr. Jekyll of modern science in its maturity.

What is called postmodernism is not really, in any way, post modernity.  A true postmodernism has only begun to claw its way through a series of unimaginable philosophical errors, the origins of which I will try to demonstrate over the next several videos.

True Postmodernity

If modernity follows the Way of Ideas, and if the idealist epistemological quagmire of the moderns leads to its own demise in a nonsensical, irrational ultramodernity, then a meaningful postmodernity must be one which follows a different path: namely, what Deely has named the Way of Signs.

Thus, if there are two figures I would definitively name as proponents of a genuine postmodernity, they are Charles Sanders Peirce and that same John Deely.  I would add, as a figure responsible for truly breaking out of modernity (even if his break has been badly misunderstood and consequently misappropriated by many), Martin Heidegger.  Based upon a cursory, initial reading of some of his works, I suspect that the little-known Eugen Rosenstock-Huessy ought also to be included.  Neither of the latter two explicitly advocates for the Way of Signs; but both turn language back to things, rather than to ideas.

Such a turn—whether implicit or explicit—allows us to recover truth as normative: precisely what modernity discarded, even if it did not realize it.


[1] Not, mind you, that the disciplines of academe are irreconcilable in principle, but rather as presently practiced.  The reconciliation could only be effected through discovery of, study in, and resolution to a common basis of knowledge.

[2] We can include also as founders of modernity Francis Bacon and Niccolò Machiavelli, but their contributions—though essential to understanding modernity’s full constitution—are more like necessary adjuncts than central pillars of its nature.

[3] 2006: “Let us not lose sight of the forest for the trees …” in Cybernetics & Human Knowing 13.3-4.

On the Meanings of “Object”, “Objective”, and “Objectivity”

The word “language” often suffers a confusion in use because of a partial equivocation in signification.  Sometimes, we use it to signify the species-specifically human capacity to express semantic depth pertaining to a being known as independent of our cognitive activity; in other words, we use the word “language” to indicate our ability to signify things as they are in themselves and not merely as they are considered by reference to our pragmatic concerns.  To disambiguate the partial equivocation, we can call this the “linguistic capacity”.  Other times, however, when we speak about “language”, we signify a specific system of signs used for carrying out this linguistic capacity.  We can call such systems “languages” or “specific languages”.

Growth of Symbols

Every specific language is composed of words, which are signifiers by convention.  That is, there is no necessary correlation between the sounds I make with my mouth or the letters I write on the page and the objects that these words—constituted through sound or writing or any other means—are intended to signify.  Thus, two distinct words can signify one and the same thing, as “dog” in English and “Hund” in German both signify the same species of animal.  But the sound “snargglish” might just as well signify that very same species—that of dogs—and by a kind of stipulation, I can say that this is what “snargglish” signifies.  If enough other people start using “snargglish” in this way, the signification changes from being by stipulation (what I have attempted authoritatively to impose) to being by custom (where no one needs to know that I have imposed it).  Customary significations tend to become stuck in the minds of those who use them; thus, if I started using the word “dog” to signify a pile of leaves, there would be both confusion and resistance, for this does not hold as a custom in the minds of others, even if it holds this way in my own.  Nevertheless, the meanings of words—the objects they are used to signify—do change: they grow, become clearer, shift, gradually dim, fall into obscurity, and so on, depending on how they are customarily used.

That said (and by saying it we have broached the topic of semiotics), while the signification ascribed to any particular word belongs to it by convention, the specific languages we use are languages at all—that is, they are instances of our linguistic capacity—only insofar as the words constituting the language immediately and proximately signify the concepts of our minds.  While the words of a specific language are conventional, the significations belonging to the concepts are not.  A longstanding tendency to conflate words with concepts obscures this truth.  But the simple fact that we have multiple languages, whose words are composed of different sounds, letters, or even entirely different writing systems, and which nevertheless convey the same ideas, shows that the concept and the word are not one and the same.

It is an important point which we cannot elaborate upon here (but which has been well-discussed many other places) that our concepts themselves, too, function as signs: that all thought is through signs.

Sometimes, therefore, the ways in which we as societies and cultures effect changes in our words as used allow us to better signify and explain the significations of our concepts.  “Symbols”, Charles Peirce said—and words are the preeminent kind of symbol—“grow”.[1]  The conventional words for “sign” across many languages, for instance, have grown considerably as symbols since their early use in ancient Greece (where the Greek word, “semeion”, was used initially to signify the symptoms of a medical condition).  This will be the topic of another post.  But we can likely think of many other words which have grown over the course of history: “community”, for one, or “truth”; “Catholic” or “Christian”, “American” or “Russian”, “education” or “rhetoric”, and so on.  This growth is not necessarily an increase of the term’s extension (its including more particulars under it, that is), but perhaps a deepening, strengthening, or clarification of its meaning.

“Objective” Meaning

Other times, however, changes in a word’s usage result in a concept being signified poorly, or perhaps no longer being signified at all, such that the concept suffers a societal atrophy.  Still other changes—stemming from a lack of careful philosophical reflection on how terms are used, from a blending of languages, from a mix-up in translation, or from a mix-up in intellectual traditions—might result in a confusion not only of the verbal signifiers but of the concepts, too.

A little of each kind of confusion has happened with the word “objective”.  Here, we have to note that “objective” has two other forms commonly used today: namely, “object” and “objectivity”.  Both “object” and “objective” have an equivocal use as well, for both are used at times to signify a goal or aim, as in describing a “mission objective” or in the sentence, “She has always been the object of his affections.”  This is closely related to the grammatical use, where we talk about direct and indirect objects of verbs.  In contemporary discourse generally, however, the terms “object”, “objectivity”, and “objective” all alike have a common signification of pertaining to reality as cognition-independent.  Thus, the term “object” is commonly used as a synonym for “thing”; “objectivity” is used to signify an absence of vested interest in the outcome of a situation; and “objective” is used to reference things as they are “factually”, “scientifically”, or independently of any “subjective” interpretation or opinion.

Many people can be observed striving to demonstrate their “objectivity” in disputed matters, just as they are seen jockeying to prove their claims “objectively true”—mostly by some reliance upon a scientific method of experimentation and statistical verification.  When it is said that we are treating another human being as a “mere object”, this indicates a diminution of their status from “person” to “thing for use”—a misuse which constitutes another, albeit closely-related, issue, since such use trades on a depreciated sense of the aforementioned equivocal meaning of “object” as pertaining to a goal or aim.

However: none of these words in their contemporary usages signifies the same concept that the word “object” originally signified—or, as it was in Latin, the specific language in which the word originated, “obiectum”.  This Latin word, “obiectum”, was composed from two parts: a preposition, ob-, meaning “against”, and iactum, a perfect passive participial form of the verb “iacere”, meaning “to throw”.  Thus, the “obiectum” was “that which was thrown against”.  Thrown against what?  As understood by the Latin Scholastic philosophers, the obiectum was thrown against some power or faculty belonging to a subject; that is, for philosophers such as Thomas Aquinas, Duns Scotus, John Poinsot, and many others, something was an object precisely and only insofar as it stood in relation to some power, and most especially a cognitive power.  Noticeably, there is a remnant of this understanding in the equivocal meaning of the term “object” as pertaining to a goal or an aim.  But this is a very weak remnant compared to the full force of the original sense.  For being in relation to a power can occur in different ways, which I will not go into here for the sake of brevity, except to say that the potential relativity of objects to powers is far more complex than the simple relation between an agent and its goal or aim.

In Latin antiquity, therefore, “subjective” and “objective” were not opposites, meaning respectively “what belongs to the opinion of an individual mind” and “what is true regardless of what any individual person thinks” (as commonly used today), but a correlative pair: the obiectum was “thrown against” the cognitive faculties or powers of the subiectum, and it was by the faculties of the subject that the thing was an object at all.  That is to say, everything having existence is a subject, but only subjects with psyches, with cognitive powers, can have objects, properly speaking.  Or to put it otherwise: ontologically speaking, everything is in itself subjective and becomes objective only by relation to a cognitive agent.

Lost Meaning

The shift of these words’ use to our contemporary meaning is, frankly, a little funny to think about: for now they are used to convey the precise opposite of what they were originally intended to signify.  But it is only a little funny, because this opposition constitutes not only an inversion of the terms but, in fact, a loss of the original meaning.  Moreover, the rise of the new meanings has had two profoundly negative consequences.

First, the idea of “objective” knowledge or of “objective truth” badly mangles the meaning of “truth”.  Truth—the revelation of what is and the subsequent adequation of thought and thing—unfolds through interpretive means.  That is, the “adequation” is not a simple one-to-one matching of something in your mind to something in the world; it requires effort; it requires us to inquire into what really is, since very often we are mistaken in taking a mere appearance for an essential reality.  Our concepts, which are the means through which the adequation occurs, are not dropped into our heads as perfect, precise means, but must be worked out through various operations—and it is never the case that we get the full, unblemished, absolute truth about the objects those concepts signify.  Our concepts are never perfect signs.  They may be sufficient, adequate, and accurate; but never perfect.  Our intellects are so weak, as Thomas Aquinas says, that we can never perfectly know the essence of even a single fly.

Second, the original concept signified by obiectum—the intelligible “thing” precisely as it is in relation to a cognitive power—is not sufficiently signified by any other term or succinct phrase of the English language.  Indeed, even the word “thing” misnames what an obiectum is.  There occurs a certain parallel in the German word Gegenstand, but that language, too, has suffered a similar confusion.  And it is difficult to make known just how important the concept signified by obiectum is once the misuse has become stuck in the minds of the many.  That is: the objects of our thoughts are not always the same as the things themselves.  Our concepts may present to us objects which differ from the things they include, either by being more limited than those things (which is almost always the case in at least one regard) or by including in their signification to us certain aspects which lie outside those things themselves (which also occurs almost always).  To give a brief example: I saw a picture the other day of the “dumpy tree frog”.  Both my concept—signifying the kind of creature, the essence of such frogs—and my percept or mental image—composed from particular experiences of such tree frogs—are extremely thin; I have one picture in mind, and almost no specific knowledge about the frog beyond what I know about all frogs, and even that is not very rich knowledge.  Thus the frog as an object of my cognitive action is much less than the frog as a thing.

On the other hand, in seeing any bulbous, dumpy-looking frog, because of the cultural exposure I have had in my life, I immediately think of the frog not just as an animal, but as one that sits on lily pads, hops around small ponds, perhaps retrieves a golden ball, and gets kissed by princesses—the first two being things that follow from its nature, but which are nevertheless relational, and the last two being fictional relations.  Since I know they are fictional, I’m not deceived, but a young child might be.  Regardless, they certainly signify something more than the frog itself.

Something very similar to this relational conceptualization happens, however, in most of our experiences.  Certainly, it happens in every experience of culturally-shaped socialization.  That is, every object we encounter which has something in it that does not belong to it strictly on account of its own nature is an object expanded beyond the boundaries of what is presented by the thing in itself: for instance, friends, lovers, teachers, judges, police officers, and so on.  There might be a basis for their being such objects—as some people make better friends than others because of how they are in themselves—but being such an object entails something more than that basis.  The mug on my desk is my mug—on my desk.  But neither desk nor mug has any property in it which makes it mine.  Each receives this designation only by an objective relation: what we call extrinsic denominations, which may be more or less fitting, but the fittingness of which depends upon a myriad of factors irreducible to the mind-independent realities themselves.

Conclusion: The Need for Semiotics

In conclusion: it is important to distinguish between our “linguistic capacity” and our “languages” so as better to grasp the nature of concepts and the means of their signification.  Language never exists as a fixed reality—“rigid designators” being, as John Deely once wrote, “no more than an intellectual bureaucrat’s dream”—but always shifts and alters over time, through use.  The conventional nature of our languages and their symbols allows us to improve our signification—but also to lose our concepts.  Such lacunae can be destructive to understanding: not only in that we misinterpret the works of the historical past, but in that we misunderstand the reality we inhabit.  For instance, the very real presence and effect of extrinsic denominations cannot be coherently understood without a robust distinction between “mind-independent things” and “mind-dependent objectivities”.  At the same time, the notion of “objective truth” results in “truth” being turned into something entirely impossible to attain.

Deep and consistent reflection upon the function of our signs—not only in general but in the particular languages we use—proves necessary to ensuring our conceptual coherence and clarity.


[1] c.1895: CP.2.302.

On Education and Its Institutions

The contemporary controversy concerning education centers on the institutions tasked with providing it.  We ask ourselves what curricula should be implemented, what teaching methods are most effective, and how governmental agencies can assist in the growth of educational institutions; we debate the morality of teachers and their influence, the rights to speech and questioning, the difficulty of grading and assessment, and so on.  All too rarely, especially as these disputes intensify, do we pause to question our presuppositions concerning the true nature and purpose of education itself.  Indeed, it is long overdue that we turn our gaze away from the institutional structure and towards the individual, the family, and especially the parents, who are not only the first teachers of their children but who ought to teach them always—who ought to be models from which their children learn throughout life.

This is not to deny the necessity of educational institutions—not only as pragmatic necessities for parents who cannot afford to homeschool but also for higher learning of every kind.  Yet, though necessary, institutions will always be insufficient.  We cannot outsource or offload the responsibility for education to any institution or collection of institutions.  Institutions are lenses that help bring clarity and focus; but they are not the light.

Real Education

Education, as any experienced educator knows, consists in guiding rather than informing; in fostering the right questions rather than the correct answers.  Intellectual nourishment, however, requires a holistic approach.  Going to the gym five days a week will do relatively little for one’s health if all other hours of the day are marked by constant consumption of junk food and buttery baked goods.  So too, the best teaching in school cannot eradicate contrary examples given at home—nor, for that matter, should this be required.  For the student to see his parents’ leisure hours consumed whole by television or distractions encourages inheritance of the same infertile habit.  Every human being signifies to every other not only through words and actions, but by the virtues and vices cultivated in one’s person.  We not only think through signs; we are ourselves symbols, signifiers of the truths and goods in which we believe, shown through our actions.

Thus, we must reorient our perspective on education: the foundation—the first symbol by which its merit is conveyed to the child and spread throughout the culture—cannot be found in the institution but rather only within the household and particularly in parents aflame with their own love for wisdom and learning.  This love becomes a first spark in the lives of children—to be focused and brightened by the lenses of educational institutions.  But they can neither start nor maintain that fire.

Communal Lights

This love of learning and discovery passed from parent to child need not concern abstruse topics—neither metaphysics nor science, theological controversy nor philosophical dialectic—but can be rooted in the very life of the home: in the tradition of the family, in the cultivation of land, in the play of language through story and invention.  Principally, this love must kindle the natural desire to know that sits at the heart of every human being.  If parents also pursue their own higher education, so much the better, for this demonstrates that learning not only satisfies curiosity or amusement, but that it requires discipline, and that this discipline earns the soul richer rewards.

By showing this intellectual discipline to children—and, indeed, to one’s whole community—the parent (or even the unmarried and childless adult) exposes the lie that education after childhood is a mere hobby or pastime.  At the Lyceum Institute we aim to provide a digital community which supports this continued pursuit of learning—for education is always enriched by being shared with others.  In fact, no education occurs alone; it is handed down, by ourselves and by others, and flourishes thereby, through books and records of findings and thought.  But a living engagement takes it further, bringing it into the life possible only through conversation, through disputation, through real questioning.  Community, structured by an institution, helps shape the lens through which the lights of learning shine brighter.

We would love for you to join us.

Medieval Semiotics

Though “semiotics” is a word coined only in the late 17th century—and used consistently and meaningfully beginning only in the late 19th—the study of signs and their actions goes back millennia. During those thousands of years, some of the most important contributions were made during the age often called “Medieval” (though it would be better termed “Latin”) and especially by the Scholastic thinkers. Listen to this two-part podcast as Brian Kemple joins Hunter Olson to discuss the key figures and ideas from this period.

And be sure to check out all the great interviews on the Dogs with Torches podcast!

On the “Culture War” and Formal Causality

The term Kulturkampf, literally “culture struggle”, has long since been translated into English as “culture war”.  I have no desire to participate in a “culture war”.  Indeed, as I will argue here, the very notion of the “culture war” is not only misguided but harmful.  Yet as someone living within a culture, I do believe it inevitable that I and everyone else—willingly or not, consciously or not—participate in the struggle over culture.

Semantics of War and Struggle

Why this “quibbling over semantics”?  Before I get to the semantics themselves, I have to say that I have never accepted as legitimate the objection that one is quibbling over semantics.  Words are important.  They signify concepts, and concepts are that on the basis of which all human history (all that is truly human, that is) has unfolded.  If you do not believe words are important, there seems to be no reason for you to read this—or anything.  In fact, the objection of “quibbling over semantics” presumes a nominalist or at least idealist divorce between cognitive activity and things independent of cognitive activity; but pursuing that question would take us far off track.

Returning, therefore, to the semantics of “struggle” and “war”: I protest the latter term because it suggests an entirely inapt metaphor.  War, to be waged justly, must have a reasonable expectation of victory.  One adopts violent means out of necessity: the need, namely, to produce or restore an orderly way of life that allows human beings to pursue their natural and fitting goods.  War should be an irregular occurrence, not a permanent condition.  And before anyone thinks to bring it up, let me say that the concept of “spiritual warfare” or “spiritual combat” must be understood in an entirely different way, one which would take us well outside the boundaries of what I am here to discuss today; succinctly, though, it may be said that there are conditions for decisive victory in matters of the spiritual soul of the human being.  Not so in matters of culture, which is by nature an intrinsically temporally-unfolding, suprasubjective reality, constituted through a pattern of relations which attains a new foundation in every human being who is born and reared within a society of other human beings.  Or to put this in simpler words: culture is an ever-present and ever-developing reality which can exist only through the exchanges human beings have with and towards one another.  It is never final, because we human beings, as existing on earth, are not final; we are by nature creatures that change, both over the course of our individual lives and over the course of generations.  So long as humans have freedom of thought and will, culture may change.

What’s Wrong with the World?

Allow me an anecdote.  When I taught ethics at the Wentworth Institute of Technology, a secular school in Boston, Massachusetts, I started each semester by giving the students a notecard on which to write their names, email addresses, a hobby or interest, and—in a single sentence or less—something they believed to be wrong with the world today, with the promise that I would give my own answer later in the semester.  Their answers ranged from the very thoughtful to the kind one might expect in a caricature of a beauty pageant.  Most were focused on what could be called systematic societal issues: poverty, inequality, abuse of power, ideologies, a lack of charity or honesty among people as a whole, and so forth.  Throughout the semester, we read a variety of thinkers influential in ethics: David Hume, J.L. Mackie, John Stuart Mill, Immanuel Kant, John Rawls, Philippa Foot, Marilyn Frye, and so on.  Each, in some way, provides a “system” for ethics, either as a whole or with regard to some specific problems: rules or sets of principles which, if followed, promise to improve society.  They might be rather loose rules or principles, or rather strict ones—but all had in mind the same goal, despite the significant differences in their means.  Mind you, I was required to provide a survey course covering a broad range of thinkers and theories—ideally, I would have focused the course more intently on better thinkers, but the conditions of my employment were non-negotiable.  Being required to teach a wide range of theories and thinkers, then, I spent most of the semester showing how these proposed systems have intrinsic and unavoidable flaws, no matter how strictly observed; how they fail, in other words—how they do not provide us a secure and ethical society, and how they may be overcome or abused.  Towards the end of the semester, we would read Aristotle’s Nicomachean Ethics—after the first book of which, I would read their answers as to what’s wrong with the world back to them.  They would remember my promise, and that it was my turn.

“What’s wrong with the world?” I asked myself, out loud, before them.  “Me,” I would answer; “I am.”

You might recognize this answer from a legend about Chesterton—I freely admit that I lifted it.  But it is, I believe, a good teaching tool: yes, there are many systematic problems with our country, our world, our politics, and our culture.  I cannot control any of those problems.  I can try to change them, but I cannot control them; for all are dependent upon millions of wills not my own.  I am, by nature, in control only of myself and even that only to a limited degree (i.e., I cannot will myself to be something I naturally am not—as I cannot will myself to be a top-tier athlete—maybe a decent one, but genetically that has always been out of my grasp—nor can someone born a man will himself to become a female, and so on).  The circumstances into which we all are born are beyond our control.  What is in our control is our capacity for virtue, the decisions and choices that we as individuals make.  Naturally, this extends into those with whom we have close relations: our individuality is only relative, and we are ourselves constituted largely through the relations we have with others.  But the faculty of the will extends efficaciously only to the self.  We may influence others through a kind of formal causality—objective or specifying causality, to be precise, which is just what I was attempting to do with those students, showing them the truth through a careful, painful, difficult process, one class session, one reading, one assignment, one Socratic hour at a time—but we cannot control their wills.  We can only attempt to specify objects for their thinking, propose to them what we believe is true, and strive to show them—most especially through how we live ourselves—the truth of the good, and thus that it is desirable.

Struggle and Habit

It is in this inability to control others, and in the difficulty of showing the truth, that the struggle over culture consists.  It is perennial; it occurs again and anew with each individual human being who grows up in this or any other society.  Believing that there could ever be a society where the demonstration of what is true is not difficult, where the struggle for it does not recur on a daily basis, is a fantasy which obscures the truth of the matter.  There are no shortcuts: the effect of specifying formal causality does not and cannot occur on a cultural scale through the imposition of force.  It is a gradual process of developing habits, and it requires careful and constant attention.  I had relatively decent success, teaching my ethics course, in persuading students to think that Aristotle was a very good starting point—to recognize that claim as true, in other words—but only because they were small classes of no more than 22 students.  (I doubt the effects were lasting, unfortunately—a single isolated course, with students exposed to little else of similar thinking.  But I may hope that their thinking has remained on the track set down by the course, given the intensity of our discussions.)  That is not to say a larger class could not have been likewise incipiently persuaded; but effecting such a first step towards persuasion among most of a large crowd would likely have been only superficial, a fleeting adherence born not of intellectual conviction but merely of winning the moment—of presenting them a fictionalized, fantastic version of Aristotle: the bold, counter-cultural Stagirite who stands athwart modernity, etc., etc.

In the age of mass media, and especially the internet, where any message has the potential to reach masses of people, such reductive approaches possess a seductive allure—especially if we conceive of the cultural struggle as being a war.  We see this video, or that trend, or this or that celebrity spreading a false message; we see their YouTube hit counters ticking over into hundreds of thousands, millions, hundreds of millions of views; this odious Tweet (Twixt?) garnering countless likes and retweets, that Facebook post being shared over and over again; misinformation being spread far and wide; and we feel that we must combat these numbers with our own.  Alarms blare in our mind and we hear the shouts of: “They are beating us!  They are winning!  We are losing!”  They are gashing us; so we must, we think, respond in kind.  We fashion exaggerated narratives, pseudo-historical accounts—we put on airs of gnosticism, of being the elect, being “those who know”. 

Pyrrhic Wars of Formal Causality

But the battlefield of those who wage war on the truth is fantasy.  To engage them in combat is to step onto that battlefield; to have to use their weapons—weapons which rely upon a kind of seduction into a way of living rather than an understanding of the truth about the good, weapons which aim at the lower rather than the higher faculties of the human being.  This would be to abuse the influence of objective causality.

I do not mean to suggest that fiction and fantasy cannot be put to good use.  They can be powerful means for telling stories which elucidate truths better than any philosophy can.  But with the degradation of good philosophical thinking, the fantastic loses its proper context of significance.  For a right formation of the moral imagination there must also be the claritas of good intellectual judgment: not only so that good works of creative fiction may be produced, but so that their interpretation might be guided correctly.  To gain these two goods of intellectual correctness and imaginative rectitude proves not a matter of battle, but of struggle.  It is lived by each of us individually and realized culturally in our being with one another.  Approached as a war, you may “win” a battle here or there—changing a school curriculum, passing a law, discrediting a movie or television show or speaker—but fought as battles, these victories are inevitably Pyrrhic, costing us more than they gain.
