I will never forget the time my mother took me, as a child, to a natural history museum. One of the exhibits was a diorama of animals that might be seen in the Texas desert at night. I sat in awe as the lights dimmed and we were presented with the evening sky; spotlights highlighted coyotes, voles, bobcats, jackrabbits, and owls while a voiceover guided us through their ecological roles in that nocturnal world.
The diorama presented a caricature, an exaggeration, a simulacrum of what one might see on a twilight evening on the prairie. Maybe you would see more and different animals; maybe you would see none. More tellingly, the animals you did see would be real, not stationary stand-ins awaiting the next set of museum-goers.
This story from my childhood has been on my mind lately in relation to AI, which is itself a catch-all phrase for many disparate technologies, most notably large language models (LLMs), which present their results in conversational natural language. In a very short timespan, LLMs have seemingly been tossed into every imaginable aspect of American society. A few months ago, the Education Secretary drew considerable attention when discussing the need for AI in the classroom, most notably by mistakenly referring to it as “A1.” The President also touts the technology as essential, insisting that it marks such a revolutionary epoch in human understanding that it should instead be referred to as “genius intelligence.”
Yet the misapplication of this technology has already been disastrous, with AI’s fingerprints all over the “reciprocal tariffs” the administration imposed: everything from a deliberately obfuscated algorithm to top-level Internet domains hallucinated in place of countries (resulting in uninhabited territories receiving tariff penalties completely detached from the real world). This present moment feels similar to the stampede following the discovery of X-rays and radioactivity, when products with greatly exaggerated healing properties flooded the market, from radioactive hair creams and cosmetics to radium-infused water. I worry that the fallout from this rush to shovel the technology into as many use cases as possible will lead to long-term harms much like those of that earlier cultural hysteria.
Consider Plato’s allegory of the cave, in which the cave’s inhabitants learn about the outside world from shapes and shadows cast on a wall. AI represents not even learning from the shadows, but choosing to learn from drawings made of the shadows: an extrapolation of an extrapolation. The problem with using LLMs as an educational tool is that they are a curated experience, like our taxidermied museum menagerie: subjective rather than objective.
LLMs represent a distillation process, and the basis of that distillation is the set of choices made in training the model. They can reinforce or even strengthen biases rather than reduce or eliminate them. This puts the intentions of the model owner in sharp relief. If I want to train a model to tell people “actually, many slaves loved their slave owners, who were gracious to them,” or “we don’t need to worry about climate change; more CO2 is good for us,” it will happily summarize and reinforce those opinions.
On their own, these problems would be disconcerting, but stripping these historical cues of their larger context is an even greater disservice. In his groundbreaking book, “Lies My Teacher Told Me,” James W. Loewen points out how textbooks already flatten and decontextualize issues, leading to a weaker and often incorrect understanding of history. My experience in academic circles is that experts in a field often smooth over contention in order to relate those issues objectively to the larger sweep of societal movements. Which is to say, as a ground rule they aren’t in the business of shocking people, and maybe that is a mistake: the actual depths of cruelty and barbarism are many times worse than what reference sources generally describe. I had a hard time holding back tears while listening to a podcast discussing plantation bricks that bore tiny fingerprints, indelible marks of backbreaking forced child labor.
I’ve lingered as long as I have on education because it seems to be the field where the misapplication of this technology has the most potential for generational harm. But I would express concern about any field where the statistical modeling is not tethered to real-world outcomes. Something as lofty and intellectual as trade policy that does not take into account real-world geopolitics, the best interests and needs of our regional partners, and a practical path forward for American businesses can be just as reckless and damaging as the thermonuclear war imagined in WarGames.
Which is not to say that every use of AI is a mistake, or that all of this has been wasted effort. Two major use cases have struck me as both useful and relevant, and I want to focus on them because they make a telling point. The first is AlphaFold, from Alphabet’s (Google’s parent company) DeepMind, which predicts protein folding, a counterintuitive and very difficult process to comprehend, with a high degree of accuracy. The second is KoBold Metals, which uses machine learning to read through old geological mining surveys and predict the most promising places to prospect for lithium, cobalt, nickel, and other high-demand renewable-energy materials. AI has also been useful in signal processing for audio and video applications, separating audio tracks and removing unwanted backgrounds. The common factor is that all of these processes rely on a degree of deductive rather than inductive reasoning, where the statistical modeling represents a major leap beyond a mere best guess.
Inference and context are essential experiential parts of human existence. I once had a relative who could rattle off every nation in Africa, a neat trick until you realized that their mental map stopped around 1965 and they were listing countries that no longer existed. Context and social cues are constantly changing around us. Going back to the example of the diorama: learning about animals in a museum won’t teach you which flowers around them are beneficial and what time of year they bloom and go to seed, which invasive creatures are putting evolutionary pressure on the local biome, or what the rainfall has been this season and whether that amount is normal or anomalous.
One of the most valued forms of learning is synthesis: the ability to absorb ideas from one field, then recognize and apply those concepts in a completely unrelated one. I bring this up because synthesis relies on active observation, rather than the passive engagement of asking an LLM a question and letting it opine on an answer.
Instead of throwing unproven technology at the problem, we need proven educators who can encourage the open-ended whys of asking questions and broaden students’ experiential understanding. Only then will young learners build a foundation of critical reasoning from which to form their own building blocks of synthesis. This is the difference between wisdom and intelligence: real enlightenment isn’t just about reiterating what we already know, but about pushing the boundaries of discovery out beyond the horizon.
