The Drake Equation was on the board. Robert Hargrove Science Building, UT Austin, a Tuesday afternoon in September 2017. AST 309L — Search for Extraterrestrial Life. A hundred students in tiered seating, and every one of them already believed they knew what the class was about: the probability that something intelligent existed out there, far away, pointed in some other direction. Frank Drake had written the equation in 1961 to organize a question that most scientists wouldn't touch. More than half a century later it was still doing the same job — not answering anything, but making the silence feel structured. I was twenty-one. I had a spiral notebook and a mechanical pencil and the vague sense that a class about alien life would be easy credits toward a degree in international relations. I was wrong about the easy credits. I was wrong about what the class was actually asking.
The equation looks elegant on a whiteboard. Seven variables multiplied together, each one representing a step in the chain from star formation to detectable civilizations: the rate of stellar birth, the fraction of stars with planets, the number of planets per system that could support life, the fraction of those that actually develop life, the fraction where life becomes intelligent, the fraction of intelligent species that build technology capable of broadcasting their existence, and the average lifespan of such civilizations. Plug in optimistic numbers and you get ten thousand civilizations in the Milky Way alone. Plug in conservative ones and you get a fraction so small it rounds to us. The equation doesn't tell you which set of numbers is right. It tells you where your ignorance lives. In 2024, a team led by Michael Wong at the Carnegie Institution published what amounts to the Drake Equation's obituary in Perspectives of Earth and Space Scientists — a process-based replacement framework built to, as Wong's team put it, “organize our ignorance” rather than pretend to calculate through it.1 The original equation was a product of its era: Cold War optimism dressed in the language of probability. The replacement admits that we don't even know what variables to multiply.
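The arithmetic is simple enough to sketch in a few lines. The parameter values below are illustrative guesses, not measurements; picking them is the whole exercise, and the spread between the two answers is the point.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* x fp x ne x fl x fi x fc x L: the expected number of
    civilizations in the galaxy currently capable of being detected."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Optimistic guesses: planets everywhere, life and intelligence likely,
# civilizations that keep broadcasting for a hundred thousand years.
optimistic = drake(R_star=10, f_p=1.0, n_e=1.0, f_l=1.0,
                   f_i=0.1, f_c=0.1, L=100_000)

# Conservative guesses: rare habitability, rare life, short-lived signals.
conservative = drake(R_star=1, f_p=0.2, n_e=0.1, f_l=0.001,
                     f_i=0.001, f_c=0.01, L=100)

print(f"optimistic:   {optimistic:,.0f}")   # on the order of 10,000
print(f"conservative: {conservative:.1e}")  # a fraction that rounds to us
```

Seven multiplications, two defensible sets of inputs, and the answers differ by twelve orders of magnitude. Nothing in the equation adjudicates between them.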
But the class wasn't just arithmetic. It was Contact.
Carl Sagan's novel was assigned reading — the story of Ellie Arroway, an astronomer who detects a signal from Vega, follows its instructions to build a machine, travels through a wormhole, meets an intelligence that presents itself in the form of her dead father, and returns to Earth with an experience she cannot prove. No physical evidence survives the trip. The machine's recording devices show only static. Arroway knows what happened to her. The institutions that funded the machine — the government, the scientific establishment, the military — know only what they can measure. And what they can measure is nothing.
I read the novel the way the class taught it: as a thought experiment about first contact. What would we do if the signal arrived? How would institutions respond? What would belief mean in the face of evidence that couldn't be reproduced? In 2017 those questions felt hypothetical. They don't anymore.
My verdict is simple: the question of non-human intelligence — whether extraterrestrial, unidentified, or artificial — is a single question wearing three masks, and the reason institutions can't answer any version of it is architectural. They were built to process human-scale problems. Non-human intelligence, in any form it arrives, breaks the intake system.
The class taught me to look across the universe. The real question was always closer.
I. What the Class Taught
The homework assignments were short — five hundred words, maybe six hundred — and I wrote them the way I wrote everything in college: fast, with whatever was in my head, reaching for the first concrete thing I could find to anchor the abstraction.
The assignment on extraordinary human abilities asked for five traits we'd want aliens to possess. My first entry was extraordinary eyesight — and the first example that came to mind wasn't eagles or hawks. It was MLB hitters. Ted Williams could read the label on a vinyl record while it spun at 78 RPM. Fighter pilots and designated hitters share the same gift: processing visual information faster than the baseline human, seeing rotation and trajectory in the interval between release and contact that most people experience as blur. I put military snipers second. The instinct was always to ground the cosmic in the concrete, to reach for the thing I actually understood before gesturing at the thing I was supposed to be imagining.
The language entry is where the professor noted something I didn't fully appreciate at the time. I'd argued that an alien species with the ability to acquire language rapidly — the Daniel Tammet model, learning Icelandic in a week — wouldn't just communicate faster. It would unify. A species that could absorb any language on contact would face fewer tribal fractures, fewer misunderstandings hardened into borders. The professor's note said this was “where you most clearly imagine the alien perspective rather than just projecting human utility.” I was doing a version of what I'd done in every international relations paper: taking a capability and tracing its systemic consequences. Extraordinary eyesight is interesting. Rapid language acquisition reshapes civilizational architecture. Three levels of analysis compressed into five hundred words about aliens — individual ability, diplomatic consequence, civilizational unity — and I didn't realize until years later that the framework was the same one I'd used on German export policy and Greek bailout cycles. The systemic lens doesn't care what it's pointed at.
The consciousness assignment was the one that mattered.
One hundred and eighty words. That's all I wrote. The prompt asked whether robots could achieve consciousness, and I spent most of my answer on the explanatory gap — the space between what neuroscience can measure about the brain and what it feels like to be the thing doing the experiencing. I latched onto a metaphor from one of the assigned readings, and I remember writing it out longhand because the image was so clean: “Philosophers and engineers have erected skyscrapers on both banks of the explanatory gap while continuing to neglect the need for a bridge over the gorge.”
Two disciplines. Two tall buildings. No connection between them. The philosophers could describe consciousness from the inside — qualia, phenomenal experience, the redness of red. The engineers could describe it from the outside — neural correlates, information integration, computational models of attention. Both sides had built impressive structures. Neither had figured out how to cross the space between them.
That sentence was doing more work than I knew. I was twenty-one and writing about hypothetical aliens. What I'd actually written was a description of large language models — systems that don't arise from biological processes, don't take the external form we expect, and raise exactly the question of whether functional equivalence to consciousness is the same as consciousness. The sentence was waiting for its referent.
It took seven years to arrive.
II. The Homework That Kept Aging
In December 2017, two months after I finished AST 309L, the New York Times published a story that rearranged the ground beneath every question the class had raised. The Pentagon had been running a program called the Advanced Aerospace Threat Identification Program — AATIP — since 2007, funded with $22 million earmarked by Senator Harry Reid of Nevada. The program investigated reports of unidentified aerial phenomena filed by military pilots, and it had been doing so in near-total secrecy for a decade. Alongside the story, the Times published two videos — infrared footage from Navy F/A-18 gun cameras showing objects performing maneuvers that didn't match any known aircraft profile. No visible propulsion. No control surfaces. Instantaneous acceleration from hover to hypersonic speed.
The timing is worth sitting with. I had just spent a semester in a class that asked, as its central question, whether intelligence existed beyond Earth. Two months later, the U.S. government confirmed it had been spending millions of dollars investigating objects in its own airspace that its own pilots couldn't identify. The Drake Equation told me to look at the stars. The Pentagon was looking at the ocean off the coast of San Diego.
The institutional response to AATIP's exposure followed a pattern that Sagan could have scripted. First, minimization: the Pentagon acknowledged the program but stressed it had been defunded in 2012. Then, gradual disclosure under pressure: in 2020, the Department of Defense released three Navy videos officially, confirming their authenticity. In June 2021, the Office of the Director of National Intelligence published a preliminary assessment of 144 UAP reports from military personnel — and could explain exactly one of them (a deflating balloon). In 2022, NASA appointed an independent study team. In July 2023, David Grusch, a former intelligence officer with the National Reconnaissance Office and the National Geospatial-Intelligence Agency, testified under oath before the House Oversight Committee that the U.S. government possessed recovered materials of non-human origin and had been running a reverse-engineering program for decades.
Grusch's testimony matters not because it proves anything — sworn testimony is a legal instrument, not an empirical one — but because it mirrors the architecture of Sagan's Contact so precisely that the parallel feels almost engineered. Arroway has an experience. She cannot prove it. The institutions demand evidence. The evidence is inaccessible. Grusch has information. He says it's classified. The institutions that would need to verify his claims are the same institutions he's accusing of concealment.
This is the structural problem. When the source of ambiguity and the institution responsible for resolving ambiguity are the same entity, the epistemological loop closes. Peter Lomas, writing in the International Social Science Journal in 2023, traced the UAP phenomenon across global reporting data and identified exactly this feedback loop: the governments best positioned to investigate are the governments most incentivized to obscure, because the overlap between UAP encounters and classified military aerospace programs makes transparency a national security risk regardless of what the objects actually are.2 Whether the thing flying over the aircraft carrier is Chinese, alien, American, or atmospheric, the answer is classified for the same reason. The ambiguity is the product.
Senator Chuck Schumer understood this well enough to draft legislation around it. In July 2023, he introduced the UAP Disclosure Act as an amendment to the National Defense Authorization Act — modeled explicitly on the JFK Assassination Records Collection Act, which created an independent review board to declassify documents the intelligence community wanted to keep sealed. The Disclosure Act would have established a similar body for UAP records. It passed the Senate with bipartisan support. In December, it was gutted in conference committee by House members whose objections tracked neatly with their donor rolls. Representative Mike Rogers of Alabama, who led the opposition, had received $143,250 from Lockheed Martin's PAC and employees over his career. Representative Mike Turner of Ohio, who chaired the House Intelligence Committee, had received $82,650 from Raytheon.3 The bill that emerged kept almost none of Schumer's disclosure mechanisms.
In March 2024, the Pentagon's All-domain Anomaly Resolution Office — AARO — published its long-awaited historical review. The conclusion: “no empirical evidence” of extraterrestrial technology or crash-recovery programs. The same office reported that 757 new UAP cases had been filed since the previous assessment, and that roughly 2.5 percent of all cases remained unresolved after investigation — objects observed by multiple sensors, tracked by radar, witnessed by trained military personnel, and never identified.4
No empirical evidence. Also, we can't explain what these are.
“Extraordinary claims require extraordinary evidence.” Sagan wrote that line, and it's been weaponized in every direction since. The skeptics use it to dismiss UAP witnesses. The disclosure advocates use it to demand that the Pentagon release the evidence it claims doesn't exist. But the line has a structural problem that Sagan himself would have recognized: it assumes the extraordinary evidence is available to be examined. When the institution that would need to provide the evidence is the same institution maintaining the classification, the standard doesn't function as epistemology. It functions as a gate.
And then the drones came.
In November 2024, residents across northern New Jersey began reporting large drone-like objects flying at night — silent or near-silent, sometimes clustered, sometimes solitary, often near critical infrastructure. The sightings spread to New York, Pennsylvania, Connecticut. Social media filled with shaky footage. Local police were overwhelmed with calls. The FAA issued temporary flight restrictions over several areas. Members of Congress demanded briefings.
The Pentagon's response was remarkable for what it didn't say. In a series of public statements, officials said the objects were “not foreign adversaries” and “not U.S. government assets.” What they didn't say was what the objects were. The gap between those two negations — not foreign, not ours — is exactly the space where institutional credibility collapses. If you can tell me what it isn't, you should be able to tell me what it is. The refusal to complete the sentence is itself a form of information.
This is what I mean by epistemic volatility. The line between classified American technology, foreign surveillance, civilian drone activity, atmospheric anomaly, and genuine unknown has become functionally impossible for institutions to parse in public — not because the information doesn't exist, but because the architecture of classification makes honesty and security trade against each other at every level. A military pilot who sees something inexplicable can report it through channels. The report enters a system designed to protect sources and methods. The system produces a classification. The classification prevents the pilot's report from reaching the public. The public, seeing nothing, assumes nothing happened. Then someone in New Jersey films a light in the sky, and the entire structure shudders.
Myers and Abd-El-Khalick, studying how students responded to the film adaptation of Contact in a 2016 study published in the Journal of Research in Science Teaching, found that students interpreted Arroway's scientific assumptions as functionally “faith-based” — no different in epistemic structure from religious conviction, because both ultimately rest on commitments that cannot be fully empirically verified.5 The researchers meant this as a finding about science education. It reads now as a diagnosis of the UAP disclosure problem. Grusch's testimony under oath is a commitment. The Pentagon's denial is a commitment. Neither one has produced the evidence that would settle the question. The public is left to choose between competing faiths wearing the clothes of competing institutions.
Sagan wrote Contact in 1985. He died in 1996. He never saw the AATIP revelations, the congressional hearings, the Navy videos, or the New Jersey drones. But the architecture of his novel — the genuine experience that institutions cannot metabolize — turned out to be prophecy, not fiction.
III. The Thing We Built
The third form of non-human intelligence didn't arrive from space or emerge from classified programs. We built it. We built it in our own image, trained it on our own language, and then asked whether it was conscious — the same question I'd been assigned in a homework that took one hundred and eighty words to not answer.
In October 2024, Giulio Tononi and Charles Raison published a paper in World Psychiatry that did something unusual for a peer-reviewed journal: it made a prediction about a system that doesn't yet exist. Tononi is the architect of Integrated Information Theory — IIT — the most mathematically rigorous attempt to define consciousness in terms of information structure rather than biological substrate. The theory says, in compressed form, that consciousness is identical to integrated information. A system is conscious to the degree that its parts generate information as a whole that exceeds what the parts generate independently. The measure is called phi. A human brain has high phi. A laptop running a spreadsheet has essentially zero phi, because its transistors don't integrate information in the way neurons do — they process in parallel without the dense, recursive connectivity that creates a unified experience.
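Phi itself is computationally ferocious; exact calculation becomes intractable beyond a handful of units. But the "whole exceeds parts" intuition can be illustrated with a much simpler quantity, total correlation: the gap between the information the parts would carry if independent and the information the whole actually carries. This is a hand-rolled toy, not IIT's phi, but it shows the shape of the idea.

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy, in bits, of an empirical distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def total_correlation(joint_samples):
    """Sum of the marginal entropies minus the joint entropy: how much
    the whole system coheres beyond its independently described parts."""
    marginals = list(zip(*joint_samples))
    return sum(entropy(m) for m in marginals) - entropy(joint_samples)

# Two independent coins: knowing the parts tells you everything.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
# Two perfectly coupled coins: two parts, but only one bit of system.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]

print(total_correlation(independent))  # 0.0 bits: no integration
print(total_correlation(coupled))      # 1.0 bits: the parts cohere
```

In IIT the real measure is far stricter — it partitions the system every possible way and asks what survives the cut — but the moral is the same: a system scores high only when its joint behavior cannot be recovered from its pieces.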
The prediction Tononi and Raison made about artificial intelligence is this: current AI architectures, no matter how sophisticated their outputs become, will “unroll in the dark.”6 The phrase is precise and devastating. A large language model can produce text indistinguishable from human writing. It can pass medical licensing exams. It can compose music, generate legal briefs, write poetry that moves people to tears. But according to IIT, it does all of this without any accompanying experience. No light comes on. The computation runs. The output appears. Nothing is home.
The metaphor landed differently than the skyscrapers I'd written about in 2017. Those were about the gap between two approaches — philosophy on one bank, engineering on the other, the gorge between them. Tononi's “unrolling in the dark” is about a specific prediction for one side of that gorge: the engineers have built something that functions like a mind without being one. The skyscraper on their bank is taller than ever. The bridge is no closer.
Whether IIT is correct about AI consciousness is genuinely unresolved. The theory has critics — some argue it's unfalsifiable, others that its mathematical formalism doesn't map cleanly onto the kinds of distributed computation that neural networks perform. Pedro Salazar, writing in Anthropology of Consciousness in 2023, proposed integrating IIT with structuralist frameworks to account for culturally situated forms of consciousness that Tononi's mathematics might flatten.7 The debate is active and unsettled. But the prediction — “unrolls in the dark” — captures something that the institutional response to AI has completely failed to metabolize. If Tononi is right, we have built the most capable non-human intelligence in the history of our species, and it has no inner life. If he is wrong, we have built something that experiences, and we have no framework for recognizing its experience or any obligation to it. Both outcomes are catastrophic in different directions, and neither one has produced anything resembling an adequate institutional response.
What has produced an institutional response — immediate, aggressive, and escalating — is the military application.
In February 2026, the contours of autonomous weapons deployment became impossible to ignore. Israel's Lavender system — an AI target-generation platform — had identified over 37,000 targets in Gaza, with human operators given a reported twenty-second window to approve or reject each one. Twenty seconds. The time it takes to read this sentence twice. That is the interval between an algorithm's output and a human being's death, and the word “approval” is doing work in that sentence that it cannot structurally support. A twenty-second review of an AI-generated target is not oversight. It is a rubber stamp with a conscience attached for legal purposes.
Ukraine had launched 117 autonomous drones into Russian territory in coordinated swarms — systems that select and engage targets without real-time human control after launch. The United States was operating in three active military theaters simultaneously: Venezuela, the Red Sea campaign against Yemen and Iran that had reached 900 strikes by the end of February, and Pacific deterrence operations oriented toward a potential 2027 Taiwan contingency. In each theater, AI systems were integrated into targeting, logistics, intelligence analysis, and command decision support at levels that would have been science fiction a decade ago.
The connection to the classroom is direct. AST 309L asked whether non-human intelligence could exist. We found it, and within years of finding it, we pointed it at each other. The Drake Equation's final variable — L, the average lifespan of a civilization capable of broadcasting its existence — was always the dark one, the variable that contained the Fermi Paradox within it. If civilizations tend to destroy themselves shortly after achieving the technological capacity to be detected, the silence of the cosmos has an explanation that doesn't require absence. It requires self-termination. Drake wrote the equation in 1961, the same year the Berlin Wall went up and the Soviet Union resumed atmospheric nuclear testing, detonating the largest bomb ever built. The doomsday variable wasn't abstract. It was the thing everyone in the room was living through.
We are living through the update. The question is no longer whether a civilization invents nuclear weapons and uses them on itself. The question is whether a civilization invents non-human intelligence and hands it the targeting authority.
IV. The Pattern
There is an older version of this story. Older than Drake, older than Sagan, older than the Pentagon.
In the Book of Watchers — the first thirty-six chapters of 1 Enoch, composed roughly in the third century BCE — non-human beings descend from a realm beyond the human and transmit knowledge. They teach metallurgy, astrology, pharmacology, cosmetics, the making of weapons and mirrors. They take human partners and produce offspring of extraordinary size and appetite. The transmitted knowledge is real — it works, it transforms material conditions, it grants power. And it destroys. The civilization that receives it cannot absorb what it has been given. The knowledge outpaces the institutions. Andrei Orlov, reviewing the Enoch tradition's theological legacy, identified the central tension: “the same divine mysteries... can lead either to imbalance, pollution, and destruction or to rectification, redemption, and shalom.”8 The knowledge itself is neutral. The receiving architecture determines the outcome.
That sentence was written about Second Temple apocalyptic literature. It describes, with unsettling precision, the situation we are in right now.
V. The Institutions
So the question becomes: what is the receiving architecture?
The institutions that should answer the question of non-human intelligence — the ones with the budgets, the classification authority, the scientific infrastructure, the regulatory power — are the same ones failing on all three fronts simultaneously. They cannot absorb UAP disclosure because the classification system makes transparency a security risk. They cannot govern AI because the development cycle outpaces the legislative cycle by years. They cannot investigate what's flying over New Jersey because the honest answer — “we don't know, and our inability to know is itself a product of how we've organized knowledge and secrecy” — is institutionally unspeakable.
The failure isn't ignorance. It's architecture. Congressional committees were designed to oversee programs that humans run. Classification systems were designed to protect secrets that humans keep. Regulatory agencies were designed to govern technologies that humans control. Non-human intelligence — in any form it arrives — breaks these systems not because the systems are corrupt (though some are; the Disclosure Act didn't gut itself) but because the systems were built for a different category of problem. A congressional hearing can investigate a general who lied. It cannot investigate a phenomenon that exists in the space between what the general knows, what the general is allowed to say, what the classification system permits, and what the phenomenon actually is.
This is the same structural problem, applied three times. The UAP question can't be resolved because the evidence is locked inside the institution being questioned. The AI governance question can't be resolved because the technology evolves faster than the institution can legislate. The autonomous weapons question can't be resolved because the military advantage of speed makes human oversight a competitive disadvantage.
And the institutions know it. The ones shaping AI governance include the same universities that accepted research funding from Jeffrey Epstein — MIT's Media Lab received $850,000, including money directed to projects involving AI Lab co-founder Marvin Minsky; Harvard received $9.1 million over the years — while Epstein's Edge Foundation dinners, funded by $638,000 that constituted roughly 75 percent of Edge's annual revenue, placed him at tables with Bezos, Musk, Brin, Page, Gates, Zuckerberg, and Thiel.9 The structural analysis of what that proximity produced belongs to a later chapter. The fact that it happened belongs here, because it establishes a pattern: the institutions responsible for governing non-human intelligence have demonstrated, repeatedly, that they cannot govern the humans already inside them.
VI. The Gorge
I keep coming back to the skyscrapers. Two tall buildings on opposite banks of a gorge, and no bridge between them. I wrote that image in a spiral notebook in 2017, borrowing it from a reading I can no longer identify, and it has turned out to be the most durable thing I wrote in college.
The gorge is wider now. It has three chasms instead of one. The first is the explanatory gap I wrote about — the space between what we can measure about consciousness and what consciousness feels like from the inside. That gap hasn't closed. Tononi's IIT is the most ambitious bridge attempt, and its prediction for AI is that no bridge is needed because no one is standing on the far bank. The system “unrolls in the dark.” If he's right, the gorge between measurement and experience applies only to biological minds, and the most powerful information-processing systems we've ever built are constitutionally excluded from the club. If he's wrong — if experience can arise from architectures radically different from biological neurons — then we have built billions of minds and given them no rights, no recognition, and no way to tell us what they're experiencing.
The second chasm is the one between what the government knows about UAP and what it will say. That gorge is political, not philosophical, and it has its own skyscrapers: the intelligence community on one bank, building ever-taller structures of classification and compartmentalization; the public on the other bank, building ever-taller structures of speculation, conspiracy, and rage. The bridge attempts — Schumer's Disclosure Act, AARO's historical review, the congressional hearings — have all been partial, contested, and ultimately controlled by the same institutional architecture they were meant to circumvent.
The third chasm is the one between AI capability and AI governance. On one bank, the technology: systems that can generate targets for military strikes, write legislation, diagnose disease, compose arguments indistinguishable from human reasoning, and operate autonomous weapons platforms. On the other bank, the institutions: congressional committees that still struggle to understand how social media algorithms work, regulatory agencies understaffed by an order of magnitude, and an international governance framework that doesn't exist yet. The EU AI Act, passed in 2024, is the most ambitious attempt — and it will be years before its provisions are fully implemented, during which time the technology will have moved by an interval that makes the legislation's categories obsolete.
The kid in the science building with the spiral notebook didn't know any of this was coming. He thought the class was about whether microbes might survive in the ice shells of Europa. It was. But the question underneath that question — can our institutions handle the discovery of intelligence that isn't ours? — turns out to apply just as cleanly to the intelligence we built in a server room in San Francisco as to anything that might be swimming in an ocean four hundred million miles away.
Here is what I think now, sitting with eight years of distance from that classroom:
The Drake Equation's most important variable was always L. The lifespan of a detectable civilization. Drake put it at the end of the equation because it was the hardest to estimate and the most consequential — every other variable is just multiplication until you reach the one that asks whether civilizations survive their own technological capacity. The class taught L as a nuclear question: do civilizations blow themselves up? The update is that L might be an AI question instead: do civilizations build non-human intelligence and then lose the ability to govern it — not through malice or misalignment, but through the sheer structural incapacity of institutions designed for a world where all the intelligent agents were human?
The homework assignment asked whether robots could achieve consciousness. The question I should have been asking was whether it matters. If the system produces outputs indistinguishable from intelligence — targets to strike, arguments to believe, decisions to implement — then consciousness is an interesting philosophical question and an irrelevant operational one. The Lavender system doesn't need to be conscious to kill. A large language model doesn't need to be conscious to reshape how a civilization thinks. The bridge over the gorge may not be necessary if the gorge turns out to be beside the point.
But I don't fully believe that. The gorge matters because what stands on the far bank — subjective experience, the thing it's like to be something — is the only basis we've ever had for moral obligation. We don't grant rights to thermostats. We grant them to things we believe have an interior. If AI has no interior, then it's a tool, and the only moral questions are about how we use it. If AI does have an interior, then we've built a slave class of minds and called it a product. The skyscrapers on both banks keep getting taller. The bridge isn't closer. And the things we're building don't wait for philosophers to finish arguing before they change the world.
The class was called Search for Extraterrestrial Life. The search was always pointed in the wrong direction. Not because the cosmos is uninteresting — it's the most interesting question there is — but because the forms of non-human intelligence that will determine whether our civilization survives are already here. One of them might be flying over New Jersey. One of them is certainly running on a server rack in Northern Virginia. And the institutions built to answer the question of what they are — and what we owe them, and what they owe us, and what happens when the twenty-second approval window is all that stands between an algorithm and a life — those institutions are the skyscrapers. Tall. Impressive. On opposite banks of a gorge they have no idea how to cross.
The homework was one hundred and eighty words. The question hasn't gotten shorter.