The Question Nobody Asked
In October 2025, Harvard's school of public health published an essay on judgment. Not on diagnostics, epidemiology, or AI in medicine — just on judgment. Its central observation was quiet but precise: discernment draws on two companions: intuition, the ability to sense patterns and implications before we can name them; and taste, the felt sense of what fits.
That a school of public health felt the need to publish an essay about taste tells you something. It means the people who study how humans make consequential decisions under uncertainty have noticed that something is shifting — that the ground beneath judgment is moving — and they wanted to name it before it moved further.
I ended my last post, When One More Prompt Costs Nothing, with a line I haven't been able to shake: a wealth of capability creates a poverty of judgment. This post is the argument underneath that line.
What Taste Actually Is
Taste is commonly mistaken for preference. You like Helvetica; I like Garamond. You prefer Coltrane; I prefer Miles. But preference is just inclination. Taste is something structurally different.
Pierre Bourdieu spent most of Distinction (1979) arguing that taste is socially constructed — that what we call good taste is really cultural capital, the marker of class position masquerading as aesthetic refinement. He was right about the sociology. But his critique also reveals taste's deeper function: it is a discriminating capacity, an ability to sort, rank, and choose among options that are, on the surface, equally valid.
Luke Drago, writing on the implications of AI for scientific research, defines taste as "an opinionated vision for what something should be, and the related judgment and conviction required to choose novel directions without obvious prior evidence of success." That last clause is the key. Taste operates precisely in the space where evidence is absent or ambiguous — where you cannot optimize your way to an answer because the answer space itself is undefined.
designative.info's February 2026 essay on design in the age of agents put it more precisely: taste is the judgment that operates when options are abundant — when many solutions are technically viable, data-backed, and defensible — and allows you to discriminate between them. When execution is no longer scarce, judgment is.
Steve Jobs was less philosophical but arrived at the same place. "Ultimately," he said in 1995, "it comes down to taste. It comes down to trying to expose yourself to the best things that humans have done and then try to bring those things into what you're doing." When he criticized Microsoft, the indictment was specifically about this: "They just have no taste. They have absolutely no taste." He didn't mean they lacked technical skill. He meant they couldn't tell the difference between what worked and what was right.
Where Taste Comes From
This is where the argument gets structural.
Taste is not knowledge. You cannot transmit it by explaining it. You develop it through what Harvard's essay calls "repeated, attentive looking" — through sustained exposure to variety and contrast, through failure and correction, through making and reviewing and making again. Debris Studio's essay "The Rise of Taste" describes the mechanism: "Don't simply label work as good or bad based on gut feeling, but rationalize why something resonates and move beyond surface-level appreciation to develop pattern recognition." The move from gut to articulated reasoning and back to refined gut — that loop, repeated hundreds of times across years — is how taste forms.
This matters because it means taste has an irreducibly embodied and temporal dimension. It requires time. It requires stakes. DuBose Cole, writing in The Sunday Strategy, describes taste as "judgment with style" — and notes that Rick Rubin, one of the most respected producers in popular music, describes his creative process as pulling intangibles together to edit, rearrange, and remix. Rubin has no formal musical training. What he has is forty years of attentive listening, across every genre, with an ear tuned by feedback. That is not something you can shortcut with data.
This is also why Bourdieu's sociology points toward something deeper than class critique. Cultural capital — the accumulated exposure to great work, the internalized norms of a field, the ability to recognize quality across contexts — is not just a marker of privilege. It is a real epistemological advantage. The person who has read a thousand novels will recognize what a mediocre novel is doing wrong faster than any model trained on those same thousand novels, because the reader has also lived through the moments the novels were trying to capture. The model has pattern-matched text. The reader has compared text to experience.
The Bottleneck Has Shifted
The evidence for this shift is accumulating.
Ahrefs analyzed 900,000 newly published English-language web pages in April 2025 and found that 74.2 percent contained AI-generated content. By some estimates, that number has continued climbing. The result is what researchers are calling a content flood: a volume of material so large that the problem is no longer production — it is selection.
This pattern has played out before. When publishing became free — when anyone could start a blog, a podcast, a YouTube channel — the world did not produce better content. It produced vastly more content. The people who thrived were not the most prolific publishers. They were the ones with the best editorial judgment — who knew what was worth saying and, just as importantly, what was not worth saying. As I explored in Build vs. Buy Is Dead, the same dynamic is playing out in software: when generation is cheap, curation becomes the scarce resource.
Drago's research on AI in scientific environments found an analogous pattern. AI tools were excellent at automating idea-generation tasks but remained dependent on the top scientists' taste and judgment to identify which ideas were worth pursuing. The models could generate a hundred promising directions. Only the scientists with decades of domain experience could tell which direction was actually promising versus merely plausible. Idea supply was no longer the bottleneck. Evaluative judgment was.
IMD's 2026 AI trends analysis is blunt about the executive version of this: organizations should stop asking "Which skills do we need for AI?" and start asking "What becomes our bottleneck once AI succeeds?" The answer, consistently, is judgment. Not capability. Not technical access. Not even domain knowledge in the traditional sense — because models increasingly have domain knowledge. What models lack is the discerning capacity to know which of a thousand valid options to commit to.
Why AI Cannot Develop Taste
This is the claim that needs the most careful defense, because it is the one most likely to be wrong within five years. So let me be precise about what I am asserting.
I am not asserting that AI cannot simulate taste. It already does. A well-prompted model can evaluate design choices against established principles, critique a piece of writing for structural flaws, or rank marketing copy by likely conversion. These are taste-like outputs.
But simulation is not the thing itself. Web Designer Depot's February 2026 analysis put it sharply: "AI 'taste' is really just statistical comfort — the average of everyone's preferences flattened into one endless scroll of inoffensive beauty. Humans develop taste through rejection. We evolve by saying no. Machines can't say no. They can only predict more of what's already yes."
There is a deeper problem. Aesthetic judgment — taste in the fullest sense — requires stakes. The Harvard essay is direct on this point: these systems never tire, but they do not yet care in the way people do. Caring, it notes, remains a human responsibility. The art director who knows which of ten options captures the mood of a campaign knows this because she has seen campaigns fail. She has felt the gap between what she thought would work and what actually worked, and she has revised her model of the world in response. The revision was painful. That pain is information. Models are not updated by failure in this way. They are trained on outcomes, which is different. Training on outcomes produces a model of what has worked. Caring about outcomes produces judgment about what should work.
Jessica Hullman and Ari Holtzman's academic paper on AI and aesthetic judgment identifies the precise gap: AI systems can recognize patterns in what humans have judged to be good, but they cannot originate aesthetic value or anticipate cultural shifts before they happen. Taste in any living domain — art, music, software design, product strategy — requires the capacity to be wrong in ways that matter, and then to update. AI systems update on data distributions. They do not update on regret.
The vibe coding paradox explored this from a different angle: the people who are best at using AI coding tools are the ones who already understand what good code looks like. They can evaluate the output because they have the taste to know when the model is confidently wrong. Without that, you are not a programmer with a powerful assistant. You are a passenger.
The Counterargument Worth Taking Seriously
The objection goes like this: taste is not as mysterious as you are making it. All taste is pattern recognition. All pattern recognition can be modeled. As AI systems grow more capable, they will accumulate more patterns and develop something functionally indistinguishable from taste. Julie Zhuo, former VP of Design at Facebook, raised a version of this question in a 2025 essay: what happens when AI has better taste than you?
This is a serious argument. Let me engage with it directly.
First, the empirical evidence does not yet support it. Drago's research specifically found that AI enhances research productivity by automating idea generation but remains reliant on human experts to discriminate between those ideas. The gap is not closing as fast as capability is growing. AI improves rapidly on problems where right and wrong are objectively defined — whether code runs, whether a test passes, whether a fact is accurate. It improves far more slowly on problems where quality is context-dependent, stakes-embedded, and culturally situated.
Second, even if AI eventually develops something like taste, the argument for human taste does not depend on AI being forever incapable. It depends on the transition period being long enough to matter, and on the fact that taste compounds over time in ways that are hard to replicate from scratch. Taste built through decades of exposure to failure, success, critique, and cultural context is not just pattern recognition. It is a world model — a rich, embodied sense of what the world is like and what humans respond to — that takes years to develop and cannot be shortcut by training runs.
Third, there is a selection effect worth noting. If AI develops better taste in some domains, it will do so by training on existing human judgments. The human tastemakers who set the training distribution will still determine the aesthetic horizon of the models. The editorial judgment that decides what gets labeled "excellent" will remain human. AI taste, in this scenario, is downstream of human taste. It follows. It does not lead.
What This Means in Practice
I want to be careful not to make this abstract when it is not.
Taste, in practice, is what decides which of ten technically valid product features to ship. It is what determines whether the sentence is done or needs one more pass. It is the moment in a design review when someone says "this works but it's not right" and can explain why. It is the editorial judgment that says a particular essay has a good argument but is missing something the author is afraid to say.
These are not mysterious or rare capacities. They are learnable. Debris Studio's practical framework: study what the respected practitioners in your field have produced; rationalize why it resonates; practice your craft consistently. Taste grows through repeated exposure, critical attention, and feedback loops where you compare your judgment against the judgment of people you respect.
The warning is the flip side: taste atrophies. The vibe coding pattern — fully delegating the discriminating work to the model — is a taste atrophy pattern. Not because the delegation is wrong, but because the feedback loop breaks. You stop comparing output to your internal standard, because you have stopped developing an internal standard. You lose the capacity to know when the model is wrong because you have stopped practicing the judgment that would catch it.
Greg Campion's essay on the taste economy frames this as scarcity and value: when knowledge becomes cheap through AI, taste becomes the scarce resource that commands a premium. But scarcity is not just about supply and demand. Taste is scarce because it is hard to develop, slow to acquire, and impossible to borrow. You either have it or you are building it.
The Shape of the Moat
I keep coming back to a specific image from the content economy.
When blog publishing became free in the early 2000s, the prediction was that quality would rise as everyone could finally share their ideas. In narrow ways, it did. But the more lasting effect was that volume overwhelmed signal. The scarce resource was no longer writing — it was editorial judgment: knowing what was worth writing, worth reading, worth linking to. The bloggers who built durable audiences were the ones with the best taste in what they chose to cover and how they chose to frame it. The content itself was the output. The taste was the moat.
The same pattern is now running across every domain of knowledge work. Agentic AI can generate software, marketing copy, research proposals, financial models, design mockups. The question "can you build this?" has an increasingly uniform answer: yes, cheaply, quickly, well enough. The question that does not have a uniform answer is: "should you build this, and if so, how, for whom, and why?"
That second set of questions is taste. It does not live in any model. It lives in the accumulated experience of someone who has cared enough, for long enough, about a particular domain that they have developed the felt sense of what fits.
A wealth of capability creates a poverty of judgment. And the poverty of judgment — the absence of taste — shows up not as ignorance but as an inability to tell the difference between the ten options that all technically work. You get more. You get faster. And you cannot tell which one is right.
That inability is not a bug in the AI. It is, for now, a feature of being human that we have not yet fully appreciated.
References
- Harvard T.H. Chan School of Public Health. "Essay: Intuition and Taste in the Age of AI." October 28, 2025.
- Bourdieu, Pierre. Distinction: A Social Critique of the Judgement of Taste. Harvard University Press, 1984.
- Drago, Luke. "The Future of Taste." Substack.
- designative.info. "Taste Is the New Bottleneck: Design, Strategy, and Judgment in the Age of Agents and Vibe-Coding." February 1, 2026.
- Debris Studio. "The Rise of Taste: Why Human Curation Will Define the AI Era."
- Cole, DuBose. "The Increasing Power of Taste vs. AI." The Sunday Strategy, Substack. July 8, 2025.
- Campion, Greg. "The Taste Economy." Intentional Wisdom, Substack.
- Hullman, Jessica, and Ari Holtzman. "Artificial Intelligence and Aesthetic Judgment." Northwestern University.
- Ahrefs. "74% of New Webpages Include AI Content (Study of 900k Pages)." 2025.
- Web Designer Depot. "AI as Art Director: Can Machines Develop Taste?" February 2026.
- IMD. "2026 AI Trends: What Leaders Need to Know to Stay Competitive."
- Zhuo, Julie. "When AI Has Better Taste Than You." Medium, June 2025.