This is the second in a four-part series on the Death and Evolution of Education, which seeks to explain why we cannot rely upon the university to provide the intellectual formation necessary for the common good, but must “evolve” a new approach to learning. Part I: Introduction can be found here. In this, Part II: The Hostile Environment, we examine how the world is no longer structured to support a university education. Part III: Maladaptation of the University—exploring the university’s failure to adapt—will be posted 21 August 2025.
By the term “environment”, here, I do not mean merely the physical surroundings that we inhabit. A human being never inhabits a merely physical environment; rather, we shape the worlds in which we live by the ways in which we think about things and the manners in which we present objects to one another.[1] Nothing in the physical structure of my house makes it mine, any more than something in my neighbor’s house makes it his; but I do not go sleep in his bed, and we would have quite a disagreement if he tried to take a shower in my bathroom. In other societies, in other places and in other times, such disagreements may not have been as pronounced or present; social cohesion amongst neighbors may have allowed a different “environment” or “world” to be discovered among similar physical surroundings. The rights of property structure both our understanding of and our operation within the environment, and they are widely accepted and practiced in our social interactions because they have a certain natural fittingness, even if they are not embedded in natural beings themselves. The degree to which someone may lay claim to objects as property, however, is not fixed and may change depending on the availability and abundance of the goods by which human beings survive and thrive.
Education, similar to property, has a natural fittingness to the human person. Just as we adopt certain laws and regulations (both stipulated by authority and customary in practice) concerning the rights of ownership, so too we develop institutions to provide education. But just as we might see environmental conditions demand changes to our practices of property ownership, so too we can see that changes in the environment will require adaptations of our educational institutions. A man living in a sparsely populated, expansive country might rightfully possess 100,000 fruitful acres he leaves untouched; but in a time of land-scarcity and popular need, it seems unjust for him to hoard and waste this property. His intent may be a certain conservational magnanimity; but he continues that pursuit in defiance of the higher common good of actual human beings. The physical environment remains the same in this case, but the socially-constituted world requires a re-thinking of property.
So too it is today with respect to education. We still need institutions to provide it. But the socially-constituted world no longer allows its concentration in the university; and, indeed, the university no longer possesses the structural capacities to provide the education we need.
What has changed in our environment? Why is the university becoming obsolete? In a word: the internet. That is, our minds—immersed in the digital environment—are insufficiently served by the institutions of higher education as they presently exist. We will turn our attention to this insufficiency in the following section. But first, we need to explain how it is that the environment has changed and how it has effected a corresponding change in what the human mind needs for the sake of becoming educated. Put otherwise, we need to know the conditions first before we can understand why the university is today maladapted to them. And make no mistake: though it may appear “unreal” in itself to many, the effects of the internet are as real and efficacious on our minds as are the rights to property. Indeed, their obscurity or hiddenness from us makes their effects all the more difficult to escape.
We can trace the evolution of this technology heretofore—and its radical alteration of the environment—in five stages: 1) the groundwork stage, from 1988 until 1993; 2) the commercial boom, from 1994 until 2000; 3) the development of Web 2.0, from 2001 until 2006; 4) the social internet, from 2006 until 2016; and 5) finally, the age of total digital integration, from 2017 until the present.
1. Early Stages of Digital Evolution
In the groundwork stage, the early hardware infrastructure was developed and implemented by the U.S. National Science Foundation, followed closely by the development of HTML and HTTP, and the opening to commercial use in 1991. Over the next two years, use of Telnet, FTP, and Usenet allowed a new form of communication, but in a manner where the infrastructure seemed merely like an accelerated postal service or fax machine, as yet limited by the hardware of computers and networks. Yet the increasing capacities of digitization already heralded what might quickly come to be. While the quantity of data transferred today would perhaps have seemed inconceivable in 1988, the technological principles are much the same.
During the commercial boom, web browsers (Netscape Navigator, Microsoft Internet Explorer, etc.) entered the scene, and commercial ISPs (such as AOL and EarthLink) expanded rapidly. This public connectivity was followed closely by the advent of websites such as Amazon, eBay, Yahoo, and Google. Direct public investment in the internet increased rapidly, as the web became not merely a place to exchange things for money: information became itself a readily-exchanged commodity. Unrecognized at the time, this connectivity to the home or personal computer radically altered the boundaries of place and privacy. Not only did the store come into the home, but so too the workplace—and darker elements as well. Previously, one would have to risk a physical, public exposure in the purchase or consumption of pornography, for instance. But it did not take very long at all for the commercial internet to become saturated with such content. Limited regulations on all manner of digital enterprise saw everything imaginable proliferate, for some years, on the “world wide (or perhaps, wild wild) web”.
Subsequently, many of these ventures—over-specialized, too niche, too rapidly-established—collapsed in the dot-com bust. But the infrastructure they had expanded remained, and many websites dedicated to interpersonal networking began to be developed: Wikipedia, MySpace, LinkedIn, Facebook, YouTube, Twitter, and more, as broadband networks began rapidly expanding across the country. Blogging, forums, and online communities became an opportunity for those in even rather remote places to find interpersonal connections about even the most obscure of interests and hobbies. Simultaneously—and too little-discussed—this increased social dynamic exposed many individuals to sensorial objects and concepts they would likely never have encountered otherwise, most especially of a pornographic or otherwise salacious nature. In the 1990s, that is, one had to go looking for such content. During the 2000s, it would be brought to you even when you were looking for something else.[2]
But beyond these shifts in the landscape of moral imagination, the advent of social media caused a certain democratization of theory to appear: that is, one could find beliefs being articulated, with varying degrees of polish and persuasion, which were not part of the “mainstream” opinion. In effect, social media gave every user his own broadcast channel. Just as one no longer had to go looking for pornography, so too one no longer had to go looking for conspiracy theories or alternative histories. While freedom of speech or expression was no novel concept, never before had the means to broadcast one’s thought been so readily available and easily mastered. This element of the environmental change in particular has had many deep effects on education—including, as one consequence that ought to be very clear with a moment’s reflection, that it undermined the authority of the university and, at the same time, confidence in the very notion of truth as “objective”.[3]
2. Later Digital Evolution
In 2007, Apple introduced the first iPhone, and the internet rapidly went from being something to which one connected to something connecting everyone. The magnitude of this element of environmental change and its consequences are difficult to overstate. By virtually attaching us all to one another at our fingertips, the social networking aspect of the internet became the predominant use of it—indeed, the social element has pervaded nearly everything digital. In the years following the advent of the iPhone, connectivity infrastructure increased exponentially: not only the integration of Wi-Fi and mobile networks, but also the move to cloud computing, real-time data mining, and the processing of input and output through increasingly complex algorithms. In 2013, when Edward Snowden exposed the extent of government surveillance, there was considerable outcry—but it seemed already then that the whole world had been swallowed in the digital maw. The continued consolidation of various platforms and services under the growing umbrellas of Facebook, Google, Amazon, and others led to the ever-increasing algorithmic mediation of content and communication. On the one hand, this insertion of a mediating filter was necessitated for the companies by the sheer volume of content produced, much of which simply becomes “noise” that users do not care to see; continued saturation of their platforms with noise would assuredly drive engaged users away. However, this algorithmic filtration, which attempted to prevent the “noise” from obscuring the “signal”, reached an obvious point of tension in 2015. This requires a little explanation.
An algorithm must adjudicate which content to show and which to hide based on measurements that are either simply quantitative (as, for instance, engagement metrics—likes, shares, comments, views, etc.) or somehow qualitative. Simply quantitative algorithms will either promote the most shocking, bombastic, and attention-grabbing narratives, or they will be manipulated by bad actors. Qualitative algorithms, however, can only operate according to developer input, and principally by means of quantitatively weighting different semantic patterns. For instance, under a simply quantitative algorithm, an extremely outlandish headline might be very successful: “Patrick Mahomes smokes crack with a baby”, or “Vladimir Putin caught kissing Benjamin Netanyahu”. Contrariwise, a semantically-weighted qualitative algorithm might demote articles that juxtapose the phrases “Patrick Mahomes” and “smokes crack” or promote one that asserts “U.S. economy in trouble again”.
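To make the distinction concrete, here is a minimal sketch in Python. The weights, thresholds, and phrase lists are entirely hypothetical, invented only for illustration; no actual platform’s ranking system is this simple.

```python
# Illustrative only: hypothetical weights and phrase lists, not any real platform's algorithm.

def engagement_score(post):
    """Purely quantitative ranking: whatever grabs attention rises to the top."""
    return (post["likes"]
            + 2.0 * post["shares"]
            + 1.5 * post["comments"]
            + 0.1 * post["views"])

# Developer-supplied semantic weights: certain juxtaposed phrases are demoted,
# certain favored phrases promoted, regardless of the engagement they attract.
DEMOTED_JUXTAPOSITIONS = [("patrick mahomes", "smokes crack")]
PROMOTED_PHRASES = ["economy in trouble"]

def semantic_adjustment(headline):
    """The 'qualitative' layer: quantitative weights attached to semantic patterns."""
    text = headline.lower()
    adjustment = 0.0
    for first, second in DEMOTED_JUXTAPOSITIONS:
        if first in text and second in text:
            adjustment -= 100.0   # heavy demotion for the juxtaposed phrases
    for phrase in PROMOTED_PHRASES:
        if phrase in text:
            adjustment += 25.0    # promotion for the favored narrative
    return adjustment

def qualitative_score(post):
    """Engagement still counts, but only as strained through the semantic weighting."""
    return engagement_score(post) + semantic_adjustment(post["headline"])
```

The sketch makes plain what the paragraph above asserts: the “qualitative” filter is not a judgment of truth but simply another layer of quantification, numbers attached by developers to patterns of words.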
Prior to 2015, most algorithms operated at least primarily on quantitative metrics—filtering posts that contained certain words, perhaps, or linked to a few sites, but not systematically filtering everything. As the major news cycle proved less efficacious in shaping the political opinions of many in the United States than did the content and commentary of social media and non-traditional outlets (a shift amply demonstrated by the 2016 election), and as opinions considered more “extreme” thereby became more commonly accepted, social media sites shifted to more rigorously qualitative algorithmic systems.
From 2017 until today, we have seen no regression of this algorithmic control; rather, we have entered an age of total digital immersion. The deliberate spread of misinformation from not only fringe sources but also the mainstream, increased tribalization and competitive narration, and the willingness to deceive or to misinterpret factual demonstrations for the sake of these opposed narratives have led many to adopt a so-called “post-truth” attitude. This suspicion of truth, or at least of our ability ever to know it, intensified with the COVID lockdowns, during which the increased dependence upon the digital environment pushed us farther away from events themselves while pulling our attention ever-deeper into the diverse myths of their presentations. The rise of generative LLM technology (so-called “artificial intelligence”—better named “intelligence simulation”) rapidly increases the creative architecture through which divergent and even opposed narratives can be presented. Simultaneously, the advent of 5G wireless communications technology has accelerated the “Internet of Things”, by which more and more devices are connected to the digital grid. The ongoing development of 6G aims at a world of total connectivity even for devices that do not have a chip. In other words, the broadcast and interpretation of 6G signals aims at functioning like a highly-sophisticated universal radar.[4] LLM technology, integrated into such a framework, would allow the connective web established by the smartphone to extend even further—and to immerse even more deeply even those who aim at simpler, less connected modalities of living.
3. Hostility to the Real
Thus, the conflict of powers and ideologies today concerns less and less the “content” of the narratives communicated. Instead, the great world powers—countries and corporations alike—struggle for control over the environment of communication itself. While this struggle has been undertaken ever since the development of the printing press, the reach of today’s communication technologies is far broader, and their effects are accomplished much faster. Thus, through manipulation of algorithms, one foreign power might influence another’s election, foment a coup, shift attitudes on a policy, spread slanderous accusations hard to repudiate, obscure some important happening from the public eye, or simply build a sense of general outrage. The populace, wearied by these fragments, becomes—consciously or not—more and more adherent to ideology not as a mere interpretation of what ought to be, but as the filter through which reality itself is strained. Counter-movements such as minimalism, analog revivals, and radical decentralization (cryptocurrency, web3, Urbit) remain inefficacious and ultimately dependent upon the same technological frameworks (even if partially independent through, e.g., unique protocols), existing within the same environment.
Of course, the digital environment consists not only in the technologies constituting this largely-virtual world, but also in the ways of thinking and behaving of the persons within it. In the first two or even three stages of the internet’s evolution, the patterns of thought were likely not yet changed in any fundamental ways. But just as the university changed when the vision of liberally-educated minds was displaced by that of knowledge-workers and effective contributors to a technocratic society, so too the internet underwent a change when users became “content creators”. The university student even of today can still find programs (though they are vanishing fast) in which his mind attains the liberal arts and is set on the path to philosophic wisdom. But even if he takes this rare path, he knows the other avenues are there—and that they are more lucrative. Someone may open social media and simply consume what others have produced; but he knows that he, too, can create videos and write blog posts; that virality and fame (who knows how much or for how long?) are just a few clever words away.
This consciousness signifies a deeper psychological change with respect to the environment we inhabit. Put as simply as possible: we inhabit an environment signifying seemingly infinite possibility, in which the distinctions between real and unreal are perceptually obscured to the point of persistent intellectual confusion. To take the example of the “social media influencer” or “content creator”: most of us know that a lot of what we see distorts or outright deceives as to the reality behind the screen. But the perceptual illusion remains persuasive nonetheless. And it is not merely a matter of this or that individual’s curated illusory presentation of some particular life or object. Rather—and this is essential for us to realize—digital technology’s capacities of storage and representation make no discrimination between real and unreal. Consider a few simple demonstrations: filters on photos, music editing software, and the editing of websites. Any photo taken with a modern smartphone can be easily edited, changing contrast, brightness, sharpness, tint; removing blemishes, re-shaping lines, and so on—oftentimes within the native camera apps themselves, to say nothing of more sophisticated programs. The translation of light into pixels occurs by fragmenting the real continuum of difference into homogeneously stored and manipulable data. Similarly, any digitally-encoded music can be “cleaned up” with various editing tools, improving pitch, loudness, even adjusting the timing; a band that might sound terrible live sounds incredible, streamed over the internet. And any website—Wikipedia for a prime example—can be edited ad infinitum. The text that appears on such a page can be altered within seconds, whether changing a single word or rewriting a whole novel.
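To see just how indiscriminate that storage is, consider a minimal sketch in Python using the Pillow imaging library; the filenames and adjustment factors are hypothetical, chosen only for illustration.

```python
# Illustrative sketch only: hypothetical filenames and adjustment values.
# Once light has been translated into pixels, an image is nothing but an array of
# numbers, and those numbers carry no mark of whether they record or fabricate.
from PIL import Image, ImageEnhance

img = Image.open("photo.jpg")                          # captured light, now homogeneous data
brighter = ImageEnhance.Brightness(img).enhance(1.3)   # 30% brighter than the scene ever was
warmer = ImageEnhance.Color(brighter).enhance(1.2)     # richer color than the lens ever saw

# The same uniformity holds at the lowest level: a pixel is only a tuple of integers,
# and writing one by hand is, to the file, no different from capturing it with a camera.
pixels = warmer.load()
pixels[0, 0] = (255, 255, 255)                         # this pixel never passed through a lens

warmer.save("photo_edited.jpg")                        # stored exactly as the original was
```

The edited file and the unedited one are, to the machine, the same kind of thing: sequences of values, equally storable and equally alterable, with nothing in the data itself to testify to what was real.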
Again, it must be stressed that inhabiting such an environment not only imperils the particular factual history of human activity but alters our habits of thinking—even when we are not on our devices, and even if we as specific individuals spend relatively little time immersed in digitality, for the habits become societally disseminated. Thus, invariably, the perceptual environment increasingly appears as constituted by ephemeral objects and we collectively lose our sense of intelligible object permanence. And so it is that, although we may retain an abstract belief in “truth” and “good”, the ever-shifting boundaries and subsequent blurring between real and unreal exhibited in the digital environment strip away the images in which these ideas are embodied. “Truth” and “good” become habitually regarded as mere abstractions. These alterations of environment and mind alike render the university—and, indeed, our current understanding of education as a whole—no longer adequate for forming minds in truly human thinking.
[1] So too do non-human animals, but we will not get into that here!
[2] I can distinctly recall, for instance, encountering highly sexualized images attached to links for pornographic websites on gaming forums.
[3] In itself, the phrase “objective truth” is a gross abuse of language—but I digress!
[4] See the white-paper from Ericsson linked here or the more accessible news story from telecom.com.