A Philosophical Happy Hour on the cognitive threats we face in an increasingly interconnected and digital world—and the possible solutions or approaches to them (the “security”).
It is mid-2020 and you cannot shake the feeling that you are not getting the whole story. We are told that a lethal virus is raging across the globe. Anecdotes are given in confirmation of its lethality. We are told that a man is murdered. Video shows us something that looks harsh, brutal. We are told that the rage of the virus is less important than protests over the murder—or is not a threat, in this context. The incongruity is striking; but pointing it out too loudly, too ardently… this is not allowed.
What is a narrative? We think of it as a story. Persons, characters, take on roles and move events. Someone—in the case of world events unfolding in the present, the media—relates these events and their agents to us. But a narrative necessarily constrains its narrator to a selective presentation. No narrative can relate every relevant fact and agent. One must pick and choose. It is not only a story, but an interpretation. When we gain access to other facts or data—information that has been omitted from the narrative—we see that the narrative attempts to influence its listeners to adopt some belief. In other words, narratives are always aiming at a certain persuasion.
When we find ourselves inundated with not only facts omitted from official narratives, but inundated with alternative narratives—as we most certainly are in the digital age—we begin to question. Are the authorities lying to us, deceiving us? Or are their detractors? Are both using deception as a strategic weapon? Are we the resources being fought over? If we are being lied to now—is this really the first time that those responsible for telling us the truth have not?
Is someone else pulling the strings?
Cognitive Fragility
Today, dozens of companies aim—directly or indirectly—at the production of brain–computer interface (BCI) technologies. The most well-known, likely, is Elon Musk’s Neuralink. Paradromics, Precision Neuroscience, BrainGate, Neurable, Emotiv, NeuroSky, and Kernel are just a handful of the others—some working by surgically implanted chips, others by wearable devices. These technologies have long been a central motif in science fiction, especially since the work of William Gibson (creator of the “cyberpunk” genre) and the Ghost in the Shell multimedia franchise. Less intense variations of the idea have been toyed with since the early twentieth century, however.
A common concern with these technologies is that, even as they might “enhance” or “augment” human capabilities, they would allow bad actors to hack others’ minds. As we have long and rightly located our identities in the mind itself, the threat seems obvious—and serious enough that BCI devices have many objectors. We already see a world of intensifying “disinformation” and “narrative hijacking”, of fragilities in our national and corporate security systems being exposed and manipulated. LLM technologies exacerbate these worries a thousandfold.
Put otherwise, we do not even need BCI devices to worry about being “hacked”. What the past five years have shown us—or, at least, brought into question—is our natural cognitive vulnerability. Why do we think the things we do? Why do we believe what we believe? If governments can knowingly proclaim false narratives to their people, and media can be complicit in the deception—not just today, but in prior decades or centuries—what else might be false that we believe?
Security of the Mind
It is 2024 and everything seen through the screen has a tint of conspiracy. Late 2025, and you begin to tire of this. 2027, 2029—you are exhausted. Exhausted from fighting to figure out what is or isn’t true. Tired—morally, mentally, spiritually—of the permanent war of information; tired of efforts to conscript you into battling (with) disinformation. How do we leave a battlefield that is everywhere, all the time?
We have adopted an inhuman way of life—not just unconsciously but even eagerly. The internet was once something to which one connected; now it is something which connects everyone. We are enveloped by the digital. This envelopment exposes us to constant cognitive insecurity. But that we can be so threatened follows only because we have a nature as cognitive beings—and a misunderstanding of that nature results in a misunderstanding of the threats.
Why are our minds vulnerable to falsehood? Is this simply a matter of nature—or is it one of “nurture”? Have we been “educated” in a way that makes us susceptible to false thinking or erroneous belief? Is it some combination of the two? Is it merely that we live now in a world saturated with media—news and entertainment alike?
How do we secure our minds? A number of thinktank-like ventures—NeuroLaunch, the Cognitive Security Institute, the Cognitive Security and Education Forum—aim to research and develop technological methods for identifying and preventing cognitive manipulation. (Others, found by an easy search engine query, are attempting immediately to profit by offering “AI” solutions for narrative and reputation protection.) Technological solutions have the apparent benefit of universality and efficiency (i.e., they can be applied by anyone); they also promise a kind of adaptability (i.e., a relative ease of being “updated” to handle new threats).
Of course, they also carry the vulnerability of being hijacked by bad actors themselves. Thus, parallel initiatives are proposed for training in “critical thinking”, as well as “media literacy”, “semantic literacy”, and “social engineering awareness”.
How Can We Defend Ourselves?
But are these fundamentally technical problems that can be fixed with technical solutions? Is our “cognitive insecurity” simply a matter of the conditions of the world today—or something deeper and more essentially human? Can we rely on DARPA, on the U.S. (or any other) government to have our best and most human interests at heart? Or do they protect us like they would any other conquered resource?
Join our conversation this Wednesday (13 August 2025, from 5:45-7:15+ pm ET) as we take up the question of cognitive insecurity—its causes, sources, means, solutions, difficulties, and more.
Come join us for drinks (adult or otherwise) and a meaningful conversation. Open to the public! Held every Wednesday from 5:45–7:15pm ET.


