The Death and Evolution of Education – Part IV: Evolution of Higher Education

Peripatetic Periodical

This is the fourth in a four-part series on the Death and Evolution of Education, which seeks to explain why we cannot rely upon the university to provide the intellectual formation necessary for the common good, but must “evolve” a new approach to learning.  Part I: Introduction can be found here, Part II: The Hostile Environment here, and Part III: Maladapted Universities here.  In this, Part IV: Evolution of Higher Education, we outline what is needed to bring higher education into a condition where it may today thrive.

In evolutionary biology, it is theorized that speciation happens in one of two ways.  First, there is anagenesis, in which the accumulation of genetic alterations over time results in the development of an organism sufficiently distinct from the original form to qualify as a new species.  In anagenetic evolution, there may be earlier and later stages, but fundamentally the lineage remains the same.  Second, there is cladogenesis, in which differing responses to the environment (or differing environments) result in the split of a single species into two or more, with better and worse adaptations to the environment.

Whether the university—being a socially-constituted reality and therefore not beholden to a genetically-determined base of its existence—will prove capable of anagenetic adaptation suitable for survival remains to be seen.  I could be wrong.  It may be that the university adapts not only for survival, but even to once again thrive.  As detailed in the previous entry in this series, however, the current structure fundamentally resists adaptation to the current environment.  Effecting the adaptation required today demands more than the mere restoration of some previous paradigm.  Only by a radical restructuring can the university survive.

But even if the university does manage to restructure, this will not crowd out the development of new branches for higher education.  Indeed: the unchallenged hegemony of the modern university structure has always been something of an artificial anomaly—a gated community where wealth and reputation allowed the appearance of intellectual superiority.  But is learning of such a nature that its only legitimate forms of life ought to be confined within so narrowly-circumscribed a garden path?  The growth of thought, perhaps, should not be left entirely untamed and free to sprout wildly wherever and however it might; but neither should it be restricted to rigidly manicured hedgerows.

Thinking, instead—now as always, but now especially—needs a genuine freedom.  Most institutions in our late-modern decline that proclaim themselves defenders of the “freedom of thought” do not, in fact, know what such a freedom is; for they hardly conceive of thought, but regard it only from the dim vantage point of having exercised themselves at some measure of thinking, and thus cannot really conceive what it means for thought to be free.  That is: thought, like any living thing, does not grow freely if it has no source of nourishment, no ground in which to root.  Thought can be free only if it has principles that allow it to grow freely.  Thinking does not come into being ex nihilo, and the mere absence of restraint is no better than its excess.  Any genuine freedom of thinking must be freedom for thinking to be what thinking is.  Thus, we must understand what it means for a human being to think and implement the practices by which thinking is done well.

Because our educational institutions do not have natures in the same way as do plants or animals, they cannot “evolve” themselves; rather, they must be shaped and re-shaped by human beings.  The mistaken structuring of our current institutions, it seems to me, is the result of these institutions being shaped in an inhuman way.  Put otherwise, they have neither implemented the practices by which thinking is done well nor understood what it means to think.  This failure does not mean the universities (and other institutions) have failed in all ways, at all things; it is very difficult to be completely wrong, after all.  But it does mean that we need to branch off in a different direction if we wish to “evolve” our institutions in a way suitable to genuine education, especially in the environment of the twenty-first century.

1. Unseen Barriers

In the first half of the twentieth century, two rather different philosophically-minded thinkers—Étienne Gilson and Martin Heidegger—shared a certain insight in common: namely, that the Western world had largely lapsed into philosophical idealism as the default position.  This position, that we know ideas directly and the “real world” only as pictured in these ideas, still occupies a privileged place in the minds of many today, even if only unconsciously.[1]  Both men saw in this default presupposition of the idealistic position a problem: namely, that, because it was the default, idealism established the terms of the debate—and thus, any argument carried out within this structure naturally favors idealism as the only answer to the question of “how” and “what” we know.  Put otherwise, if we take for granted that the idea “in our minds” is what we know, we can never get outside of idealism—regardless of how we argue for that idea being formed in the first place.

The situation today concerning education exhibits an analogous presupposition.  Education’s terms have been established by credentialism; the sign of “being educated” is one’s collection of credentials.  A “highly-educated woman” is one who has many degrees, certificates, accomplishments achieved within the academy.  “He didn’t finish his education”, we say of the one who withdraws from a degree.  Of course, neither idealism nor credentialism has agency; rather, they are themselves ideas that determine actions through the purchase they hold in the minds that have accepted them.  Idealism does naught without idealists, and credentialism requires institutions of credentialing.  And so long as we continue to pursue credentials as an end of education, we are confined by the often-hidden barriers of the credentialing complex—and the roots of thought, as a consequence, are constrained to narrow channels limiting their ability to adapt and expand.

These unseen barriers exist not merely within the university, but take hold in our own minds as well.  As the university declines, and studies in the humanities—especially graduate studies—find themselves being squeezed out, the professors and other defenders of credential education find themselves in a panic.  Who will safeguard knowledge, wisdom, the teaching of truth?  (Is this what our universities have been doing?)

Many people do not so much as know what the love of truth is, but confound it with a love of dogma, where this word is used for whatever statements there may be assent to which on their parts has been inculcated either before they could really apprehend their meaning,—even if they have any meaning, or at any rate in a way that could afford no reasonable assurance of their truth.  Such people wear an armor almost impenetrable by any correct notion of the love of truth, or even of truth itself, while the inculcated formulae themselves, whether catholic, or calvinistic, or spiritualistic, or “christian-scientistic”, or pseudo-pragmatistics, or “positivistic”, or homeopathic, or what not, are almost always of a nature to undermine and wreck the reason that is natural to man but which their victims have been taught to regard as not merely fallible but thoroughly evil.[2]

2. Recovering the Roots of Thinking

Nihil est in intellectu quin prius fuerit in sensu—nothing is in the intellect which was not first in the senses.  Reasoning, as an activity of the intellect, cannot but arise from sensation, or what we might call sense-perception.  Every thought we have has some origin in and relation back to the phenomena of sense.  This sensory basis of our cognition does not entail, however, that every thought must be of objects strictly contained in the sensible as such.  Rather, what we discover in sense-perception are realities signified by those sensory objects but not reducible to them.  I will return to this point of how the sensible signifies the non-sensible in a moment.

But first, let me note that—as discussed in a previous entry of this series—today’s perceptual environment finds itself blurred between objects real and unreal; for the contemporary perceptual environment increasingly finds itself mediated and determined by the use of digital technology.  Confusion about the nature of perception and its role in human cognition proves therefore particularly dangerous.  Such an environment leads easily into confusion not only of a perceptual but an intellectual nature as well—we are inhibited both from correctly discovering and from judging the intelligible realities of our world.  In a vicious circle, an especially pernicious contributor to the contemporary confusion stems from one such error of judgment about the intelligible reality of ourselves: namely, the belief that the human person is his or her brain—that the psychological identity of the individual is reducible to the structure and patterns of the neurological.  I have named this belief “neuroreductivism” for short.

Such neuroreductivism extends the aforementioned idealism noted by Gilson and Heidegger in a strangely materialist turn.  Under this belief, ideas are not regarded as immaterial or spiritual occurrences in a mind separate in principle from the body, but rather as diverse configurations of our neurologically-ordered gray matter.  Yet they are still constrained to being “inside” one’s “mind”, even as “mind” is reduced to that neurological configuration.  Accepting this belief as a principle for understanding the self has led many individuals to jettison responsibility for their own actions and behaviors.  “That’s just the way my brain is”, we hear offered as an exculpation for one or another fault or failure, intellectual and moral alike.  It becomes an excuse not to think; we simply accept ourselves as neurological machines, and therefore excuse ourselves from having to work out the meaning of any objects we experience.  Because we are thinking things, however, we do continue to think about the objects we perceive nonetheless.  Only, with these presuppositions, we think about them as objects of potential use or pleasure.  We do not think about what the objects are, in themselves—merely what they are to us.

But neither our perception nor our thinking occur within our brains, even though our brains are necessary for perception and thinking to occur.  Rather, those occurrences that do happen within the brain are signs of the things perceived or thought.  Even though the brain may signal that something unpleasant afflicts the foot, we do not say that pain is in the brain, and we do not start rubbing our heads to get rid of a stomachache.  We might have confusions in the signaling—such that we think something external causes a pain, or a pain is referred from one part of the body to another—but this is a certain malfunction within the body.

We need, in other words, a correct habituation of our perceptual faculties if we are to think clearly.  This habituation of our faculties is not mechanically accomplished.  It exists inseparably from conceptual thought and conscious intention.  That is, even though we may do a lot of thinking at the perceptual level—interpreting and navigating the phenomena that appear to our senses—these operations do not occur in isolation from our higher conceptual thinking.  The two are entangled.  Our educational environments need to recognize and adapt to this entanglement.  We should not attempt to separate the two and treat them independently, in other words, but discover the fitting and natural harmony of intellect and sense-perception.

This harmony is not, however, symmetrical or of equal parts.  Though the perceptual dominates our lives quantitatively, the intellect is stronger qualitatively.  Perceptual cognition proves quite limited in its scope when contrasted with the conceptual.  The clearest sign of this contrast can be found in the most distinctive aspect of human—as opposed to non-human animal—communication: namely, that ours is structured through language, whereas no non-human animal has anything more than speech.  We cannot here fully demonstrate the significance of this difference: suffice it only to say, here, that human communication recognizes the difference between signs and signifieds in a way that no other animal accomplishes.  This difference brings our distinctive intellectual cognition to bear upon all that we do.

If we do not recognize the nature of this difference, nor understand with greater clarity what it is our language does through our use of it—or how its use structures our environments, especially the digital environment—invariably we misuse that language.  Subsequent to that linguistic misuse, we also misconstrue our environments, both in developing them and in interpreting them.  Thus, we find the digital environment in particular to favor today a continued confusion of our perceptual thinking.  We have gained a certain expertise in designing digital applications and ecosystems of attention, in which different patterns of attraction exercise a continual pull on their users.  This expertise in design plays upon a particular fragility of human thinking: an attraction to that which seems novel.  We therefore find ourselves in an environment of rapid change: continual bombardment with new news, new media, new iterations and variations as we scroll through social media timelines and algorithmically-produced content recommendations alike.  The speed at which this environment changes its appearances results in a shallowing of language; that is, language becomes swept up into functioning as an image (e.g., “based”, “cringe”, “liberal”, “conservative”—even countries and persons, as “Israel”, “Ukraine”, “Gaza”, “Trump”, “Putin”, etc.), losing any semantic depth.  This rapidity, by which present uses of the digital erode our semantic depth, proves inimical not only to language, but also to the good formation of memorative habits, as the speed with which the new overtakes the old disrupts both the possibility for repetition and the kind of focus by which language can attach to durable meaning.

But—even the most confused perceptual environment is permeated by intelligibilities resonant with the essential nature of linguistic signification.  Thus, even where and when we cannot say what will prove to be fabricated or factual, even if we cannot distinguish the exaggerations or obfuscations of the screen from the reality behind it, we may nevertheless continue to think about what is being said, and how it is said.  That is, even the images, properly speaking, that dominate our environment today are possessed of a structure that can be linguistically regarded, understood, and questioned.

Thus, only an education which brings this awareness of language, and of how to think through language about the environments in which we live, will prove capable of enduring this digital blur.

3. Breathing in the Air of Inquiry

Put otherwise, the digital may not distinguish between factual and fictitious representation; but language cannot help but distinguish between true and false, even if only in the most implicit and vaguely-conscious judgments.  Unfortunately, this inevitability of judgment can itself be poorly habituated.  Thus, an unthinking attitude towards the true and the false results in their attachment to perceptual phenomena—often issuing in an ideological bent of thinking; even, at extremes increasingly common, in fanaticism.

This unthinking attitude germinates in the narrow corridors of today’s educational institutions.  Countless words have been written on how K-12 education has done much to stifle natural human inquisitiveness.  But so too, many universities today—themselves ordered in the narrow paths of credentialing for careerism—allow questioning only under the imposition of artificial constraints.  Too little and too few are the opportunities to breathe deeply in an environment of inquiry.  Such air alone brings vitality to the life of the mind, nourishing those roots set down through perception.  Today, the vast majority of the United States—if not the whole of the Western world—has never known its thought nourished by such clear, clean air.

But even those few who do experience such rarefied atmospheres experience them for too short a time, in a way of living quite disconnected from “normality”—that is, within the academy.  Anyone who wishes to continue breathing that air must become a member of the academy, perpetuating the disconnect.  But most depart the academy after four years, thrust back into the clouded worlds of perceptual confusion, where thinking suffocates under the corrosive miasma.  Only by heroic effort can a pipeline of clear air be kept open.

The point, of course, is not merely to grouse about the conditions, but to see in them the opportunities.  Human beings, if we can but free ourselves from the conceptual shackles by which we confine ourselves to “accepted” paths, have a remarkable ability for innovation. 

And today, despite the gross abuses rendered through it, digital technology does indeed provide an incredible opportunity to live in an intellectually-rich environment, wherein inquiry can be not a mere pastime of the wealthy or idle but a habit for daily living.  We have in our hands the speed at last to slow down—and think.  We need not do this alone, as mere consumers of content, but may discover and engage communities of like-minded inquirers, of those who want to order their lives not by ideological reaction or under the intoxication of ever-changing perceptual distractions, but by a coherent collation of the truth.  And we can accomplish this by bringing language back to the center of our education; not as a mere necessity of societal functioning, but as the very essence of what makes us human.

Put otherwise, the perceptual environment needs to be brought into the light of intelligence; we need to shape our habits of perception in accordance with the reality with which language resonates.

An epilogue to this series will follow in the coming weeks.


[1] Heidegger 1927: Sein und Zeit, 207 (in the English translation by Macquarrie and Robinson, 251): “As compared with realism, idealism, no matter how contrary and untenable it may be in its results, has an advantage in principle, provided that it does not misunderstand itself as ‘psychological’ idealism.  If idealism emphasizes that Being and Reality are only ‘in the consciousness’, this expresses an understanding of the fact that Being cannot be explained through entities.”  Gilson 1935: Le réalisme méthodique, in the English translation by Trower, 93: “Most people who say and think they are idealists would like, if they could, not to be, but believe that is impossible.  They are told they will never get outside their thought and that a something beyond thought is unthinkable.  If they listen to this objection and look for an answer to it, they are lost from the start, because all idealist objections to the realist position are formulated in idealist terms.  So it is hardly surprising that the idealist always wins.  His questions invariably imply an idealist solution to problems.”

[2] Peirce 1910: “The Art of Reasoning Elucidated” R678, in Logic of the Future, vol.1, 152.
