Brands of loving grace, and humans who use AI for good

Michael Carter, Founder & CEO, brand.ai

Brand has never mattered more. And it has never been managed worse.

Billion-dollar companies run one of their most valuable assets on 300-page PDFs no one opens. Guidelines ship quarterly while culture moves at meme speed. AI is flooding every channel with content while consumers grow allergic to anything that smells synthetic. The gap between what brands need and what their tools can do has never been wider.

Richard Brautigan, in "All Watched Over by Machines of Loving Grace," once imagined a world where machines and nature coexist in harmony. Dario Amodei borrowed that vision in describing a world where a century of progress collapses into a decade, a "country of geniuses in a datacenter."

I've had over a year to sit with that essay and watch the ideas play out. If "marginal returns to intelligence" are real, we're nearing a threshold where the gap between human and superhuman intelligence becomes a cliff. My question is simple: what happens when that intelligence turns its attention to the cultural work of building brands?

AI has the power to turn brand from a static artifact into a living system: versioned, enforceable, and adaptive. The brands that win will be the ones that preserve distinctiveness while moving at cultural speed.

What took Coca-Cola nearly 140 years to build might happen to new brands in five. Awareness can scale fast. But trust and meaning don't automatically scale with it. That's where the real work lives.

The Breaking Point

Jaguar's November 2024 rebrand shows what happens when brands can't keep up with culture. Months of work produced an identity that erased heritage in favor of generic minimalism. The launch was received as meme fodder. By April 2025, Jaguar registered just forty-nine new cars across Europe. Forty-nine. Down from nearly 2,000 the year before. The agency relationship ended. Leadership churn followed.

The backstory matters: a leaked 2022 letter from internal designers had warned leadership about exactly this direction. They felt the soul of the brand was being handed to people who didn't understand it. Two years later, the rebrand proved them right.

Jaguar isn't alone. In May 2024, Apple launched its iPad Pro "Crush" ad, showing a hydraulic press destroying instruments, books, and art supplies to reveal the tablet. The intended message: all of human creativity compressed into one device. The received message: "The destruction of the human experience. Courtesy of Silicon Valley," as Hugh Grant put it. Apple apologized within 48 hours and pulled the TV buy. Even the company that defined creative marketing for forty years could misread the room.

Then there's Arc'teryx. In September 2025, the outdoor brand staged a fireworks display at the base of the Himalayas to "honor Mother Nature." An outdoor brand built on environmental credibility, setting off explosions in one of Earth's most fragile ecosystems. Anta, Arc'teryx's parent company, saw its stock drop 2.4%, wiping out roughly $849 million in market value. The stunt is now under government investigation.

These aren't PR crises. They're symptoms of a deeper problem: brand decisions being made without real-time cultural feedback loops. The gap between intention and reception has never been wider. The consequences have never been faster. A brand misstep in 2015 had a news cycle. A brand misstep in 2026 has a meme cycle, a boycott cycle, and a stock-price cycle, all within 48 hours.

Billion-dollar companies are managing brand with tools from 1995.

Cultural trends now move faster than brand processes can handle. Approval workflows crawl. Boards demand marketing ROI, and many companies are firing their CMOs when it doesn't materialize. The tools haven't kept up. The talent hasn't kept up. The workflows were built for a world where you had weeks to respond. Now you have hours.

The year of slop

Slop isn't just an internet problem. It's a brand problem. When content is cheap, meaning is expensive.

Merriam-Webster now defines slop as "digital content of low quality that is produced usually in quantity by means of artificial intelligence." According to consumer research, enthusiasm for AI-generated content dropped from 60% in 2023 to 26% in 2025. And yet 79% of marketers increased AI investment this year. The gap between what brands are doing and what consumers want has never been wider.

Andrej Karpathy predicted it would get worse. He coined the term "slopacolypse" to describe what 2026 would bring: an avalanche of low-quality AI content so vast it reshapes how people consume and trust information online. The name fits.

"We're basically teaching our models to chase dopamine instead of truth."

Edwin Chen, Surge AI CEO, on Lenny's Podcast

The backlash is becoming physical. Friend, an AI wearable company, blanketed the New York City subway with ads. Within days, they were covered in graffiti: "AI is not your friend." "The best way to make a friend is over a beer." Pinterest added tools to filter AI content. iHeartMedia launched a "guaranteed human" campaign. Apple TV's new show from Vince Gilligan runs a credit: "This show was made by humans."

That phrase, "Made by humans," is becoming a quality label, like "handmade" or "small batch." When everything can be generated, the fact that someone chose to create something becomes the differentiator. In a world where platforms can't verify authenticity, brands need systems that maintain distinctiveness at the source. Not disclaimers. Not watermarks. Distinctiveness baked into the DNA of every output.

Adam Mosseri, head of Instagram, published a 20-slide memo in late 2025 admitting that AI content has won. His exact words: "Everything that made creators matter, the ability to be real, to connect, to have a voice that couldn't be faked, is now suddenly accessible to anyone with the right tools." His solution? Creators should lean into "raw, unproduced, unflattering" content as proof of humanity. Imperfection as defense. But what Mosseri was really saying: Instagram can't solve this problem. Platforms are admitting they can no longer distinguish real from synthetic.

Then came the capstone. Super Bowl 60, February 8, 2026. Fifteen of sixty-six ads featured AI, 23% of the total per Adweek's count. The reception was brutal. The Verge's headline: "AI-generated ads dropped the ball." Artlist bragged about producing their spot in five days, as if speed were the point. The ads weren't offensive. They were forgettable. Interchangeable. Thirty seconds of polished nothing. The most expensive airtime in advertising history, used to demonstrate that AI can produce work nobody remembers. That's slop at scale, and it's the perfect bookend to 2025's slow erosion of quality. The technology got better. The output got worse. Not because the tools are bad, but because the people using them confused speed with quality.

When OpenAI replaced ChatGPT's default model with GPT-5 in mid-2025, there was such backlash from users attached to GPT-4o's personality that the company had to bring the old model back. "GPT-5 is wearing the skin of my dead friend," as one user put it. People formed relationships with a voice. That's brand at its most elemental.

The homogenization crisis

The paradox is that brands are adopting AI to differentiate, but AI is making them all sound the same. It's not that the content is bad. It's that it converges. The same adjectives, same rhythms, same safe ideas. When everyone optimizes toward the same statistical center, distinctiveness disappears.

One warning came in 2023. When Italy briefly banned ChatGPT, researchers found that Milan restaurants experienced 15% decreases in content similarity and 3.5% increases in consumer engagement during the ban. Access to AI appeared to drive homogenization.

You might think this was an early-model problem. But research using GPT-4 told the same story. A study found that while AI-assisted stories were rated more creative individually, they were significantly more similar to each other. The authors called it a "social dilemma": individuals benefit, but collective novelty suffers.

The visual side is no better. A January 2026 study in Patterns found that AI-generated imagery converges on the same generic aesthetic, regardless of prompt. The researchers dubbed it "visual elevator music." The term is precise. Nobody chooses elevator music. It's just there, filling space, offending no one, moving no one.

The most troubling finding came in 2025. Researchers discovered a persistence effect: even months after people stopped using AI, their content remained homogenized. The pattern had been internalized. AI didn't just change the output. It changed the people producing it.

Brands are using AI to cut costs while consumers explicitly prefer human-made content. They're automating distinctiveness out of their own identities. And the longer they do it, the harder it becomes to reverse.

Homogenization is the enemy of memorability, and memorability is the foundation of brand building.

The false binary

The creative industry is divided into camps: all-in on AI, or loudly against it. Both camps are missing the point.

This debate has happened before. Charles Baudelaire called photography "art's most mortal enemy" in 1859, then sat for photographs throughout his life. His portrait by Nadar remains iconic. Photography didn't kill painting. It freed painting to become something else entirely. Impressionism, Cubism, Abstract Expressionism all emerged after the camera arrived. The artists who thrived weren't the ones ignoring photography or imitating it. They were the ones who let it redefine what only a human hand could do.

The pattern repeats. Every wave of automation shifts the bottleneck from execution to judgment. Human computers gave way to VisiCalc. Typesetting shops collapsed when PCs hit ad agencies in the 1980s. The work didn't disappear. It moved upstream. AI will do the same to brand work.

In 1998, Paul Krugman predicted the internet's impact on the economy would be "no greater than the fax machine's." He wasn't wrong because he was dumb. He was wrong because he was early. The companies dismissing AI entirely will look the same in five years. So will the ones that surrendered their brand to it.

There's an irony that the companies building these AI systems understand this better than anyone. Anthropic and OpenAI are using their own tools, but they're pairing them with the best agencies, the best photographers, and the best writers. They're not cutting corners on taste. They're investing in human judgment precisely because they know what's at stake. If the companies building AI are betting on elite creative talent for their own brand work, that tells you everything.

What's actually working

Remarkable things are happening where AI meets deep expertise. Demis Hassabis and John Jumper won the 2024 Nobel Prize in Chemistry for AlphaFold, which predicts the 3D structure of proteins. In five years, it's become as fundamental to biochemical research as microscopes. At Harvard, an AI model called PopEVE helped diagnose rare genetic diseases in roughly a third of 30,000 previously undiagnosed patients.

Sam Altman noted in late 2025 that AI-driven scientific discovery arrived faster than OpenAI expected. Mathematicians report that current models have crossed a threshold in how proofs and research workflows operate. OpenAI's internal benchmarks show GPT-5.3-Codex, released February 5, 2026, beating or tying human experts 74% of the time across 40+ business tasks. Six months prior, that number was 38%. This is the compression Amodei described, and we're already inside it.

But the track record of AI in marketing is less inspiring. Most tools optimize for volume, not quality. They make it easier to produce more content, faster, with less friction. That's precisely the problem. The world doesn't need more content. It needs better judgment about what content should exist.

So why build an AI tool for brand work? Because the alternative is worse. Brands are already using AI, badly, through consumer tools that don't understand their context. The same compression is coming for cultural work whether we like it or not. The question isn't whether AI will reshape brand building. It's who's building the tools, and what they're optimizing for.

The invisible seam

Here's what we actually care about at brand.ai: using AI in a way that you don't even know AI was used.

Not the AI slop marketers know too well. Not obvious ChatGPT prose with its telltale cadence. We're talking about AI as an invisible ingredient. Present in the process, absent from the perception.

The best work will be indistinguishable from human craft, not because AI has replaced humans, but because humans have mastered AI as a tool. But mastery requires distinguishing between two uses: learning and creating.

Andrej Karpathy put it well in a post: "Learning is not supposed to be fun... the primary feeling should be that of effort." Learning is like going to the gym for your brain. AI can be an incredible tutor, patient and endlessly willing to explain. But it can't do the reps for you. The moment you let it write your brief instead of helping you think, you've traded learning for the appearance of learning.

Creating is different. Once you've developed genuine skill and taste, AI becomes a force multiplier. You know what good looks like because you've put in the hours. The people who skip the learning phase produce slop. They can't tell when AI is giving them mediocre output.

Imagine what Man Ray would have done with this technology. He pushed photography into places Baudelaire couldn't have imagined: rayographs made without a camera, solarization techniques with Lee Miller, fashion spreads for Vogue alongside surrealist films. He used photography to realize what he saw in his mind. But he could only do this because he understood the medium deeply enough to break its rules intentionally. That's the difference between using AI and being used by it. The tool doesn't care about your brand. You have to.

When anything is possible, the bar gets higher. Technology disappears, and what remains is the concept, the vision, the point of view.

In brand work, we translate product truth into culture. We make choices about what to emphasize, what to leave out, and how to earn attention in a crowded world. There's craft in that. There's taste. There's responsibility. And there's always the temptation to take shortcuts. AI is another tool. Brands need to have something worth saying, and they need to say it well.

Lessons from engineering

The software engineering world faced this inflection first, and it's moving fast.

Karpathy coined "vibe coding" in February 2025. He barely touches the keyboard, accepts changes without reading diffs, copy-pastes error messages with no comment. This sounds reckless, but Karpathy is an extremely talented programmer. He's using AI this way because it's fun and fast. For low-stakes projects, why not?

His more recent posts capture the magnitude of this shift: "Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession." And: "I've never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between."

Jaana Dogan, a principal engineer at Google, posted in early 2026: "We have been trying to build distributed agent orchestrators at Google since last year. I gave Claude Code a description of the problem, it generated what we built last year in an hour." Simon Willison drew an important distinction: "If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding. That's using an LLM as a typing assistant."

"2024 was chatting with AI. 2025 was delegating to it. 2026 is orchestrating it."

Andrej Karpathy's 2025 Year in Review

The engineers who are best at their craft are also the most proficient users of AI. This isn't coincidental. Good engineers have spent decades automating their own work: linters, formatters, test suites. They see AI as a force multiplier, not a threat. The strategists and creatives who will thrive aren't the ones resisting these tools or surrendering to them. They're the ones learning to direct them toward outcomes that require human judgment, taste, and cultural intuition.

The taste bottleneck

The top 1% of writers and creatives are in higher demand than ever. Why? Because they can control and understand these systems. They know when output is slop. They have the taste and judgment to direct AI toward specific outcomes. In a market flooded with mediocre content, the people who can consistently produce exceptional work are more valuable, not less.

The divergence between AI companies tells the story. OpenAI is building feeds. Anthropic employs a philosopher named Amanda Askell to work on Claude's character, asking questions like: what does it mean to bring a new kind of entity into existence? One company is building infrastructure for attention. The other is asking what kind of mind they're creating. That choice maps directly onto the one facing every brand team: are you optimizing for volume, or for meaning?

UI design is collapsing into code. Figma launched prompt-to-app. Cursor has exceptional built-in design capabilities. Designers are shipping features in an afternoon that used to take weeks. But UI component generation is getting commoditized. The bottleneck remains in creative direction, curation, exceptional typography, and beautiful motion.

Engineers adapted by becoming "context engineers," structuring codebases so AI could generate code while understanding the reasoning. As Alex Duffy wrote, context engineering is "less like operating a scientific instrument and more like mastering a musical one." Brand builders face the exact same challenge. The discipline isn't prompt engineering anymore. It's context engineering: building the environment that makes AI produce brand-coherent output by default, not by luck.

Amodei's intelligence thesis holds for brands too. Intelligence becomes the scarce resource that unlocks exponential progress. Emily Segal and K-HOLE proved this with "normcore." A single cultural read, published as a PDF, downloaded half a million times, that reshaped fashion for years. Outsized gains from the right idea at the right time. That's the taste bottleneck. No volume of AI-generated content can replicate a single insight, sharply expressed, at the right cultural moment. ChatGPT doesn't know your blue is a promise, not just Pantone 286. It can't tell the difference between clever and offensive. The solution isn't avoiding AI. It's building AI that actually understands your brand.

The view from the front row

We started building brand.ai in late 2022, right as ChatGPT launched. We saw what was coming before most brands did.

First it was individuals using AI for emails. Then whole teams uploading brand books to consumer tools, each getting slightly different answers about what their brand stood for. By 2024, strategy decks were being run through generic AI, producing five hundred versions of the truth with zero alignment.

Researchers have a name for this now: "workslop." A Stanford and BetterUp Labs study published in Harvard Business Review found that 41% of workers have received AI-generated content that "masquerades as good work, but lacks the substance to meaningfully advance a given task." Each instance costs nearly two hours of rework. For a 10,000-person company, that's over $9 million a year in lost productivity. The tool meant to save time is creating more work downstream.

The scariest moments happen in the gaps between policy and practice. While IT drafted acceptable-use policies, brand teams had already uploaded years of strategy to ChatGPT via personal accounts. One company discovered their team had been using AI for months, shipping ten times more content while nearly publishing a campaign that would have offended multiple cultural groups. Caught only by luck, not process. That's the nightmare scenario: not that AI fails spectacularly, but that it fails quietly, at scale, in ways nobody catches until the damage is done.

We built brand.ai because we knew this was inevitable, and we knew it could be done better. Not by avoiding AI, and not by surrendering to it. By building the infrastructure that makes AI work for brands instead of against them.

Building brand intelligence

Dario imagines a datacenter full of geniuses revolutionizing science. What if brands had the same? A collective intelligence that knows your brand as deeply as your best strategist, but can process cultural signals in real time.

That's what we're building. Not another tool to manage, but a living operating system for brand. Instead of treating your identity as just another dataset, we create dedicated intelligence trained on your brand's DNA: voice, values, visual system, strategic framework.

Here's what that looks like in practice: a social post gets generated, then brand.ai checks voice consistency, flags taboo topics, applies legal guardrails, validates visual system rules, and cross-references against your competitive landscape. It returns a scored report with suggested edits and logs the decision for governance. Permissions and audit trails ensure brand knowledge doesn't leak into personal accounts. Every decision is traceable. Every output is auditable.
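As a loose sketch of what a check layer like this might look like (the rule set, scoring weights, and all names below are invented for illustration, not brand.ai's actual implementation):

```python
from dataclasses import dataclass, field

# Invented example rules; a real system would load these from the brand's guidelines.
BANNED_TOPICS = {"politics", "tragedy"}
VOICE_WORDS = {"bold", "crafted", "honest"}  # hypothetical on-brand vocabulary

@dataclass
class Report:
    score: float
    flags: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

def check_post(text: str, topics: set[str]) -> Report:
    """Score a draft post against brand rules and log the decision."""
    report = Report(score=1.0)
    # Flag taboo topics before anything ships.
    for topic in topics & BANNED_TOPICS:
        report.flags.append(f"taboo topic: {topic}")
        report.score -= 0.5
    # Crude voice-consistency signal: does any on-brand vocabulary appear?
    words = {w.strip(".,!?").lower() for w in text.split()}
    if not words & VOICE_WORDS:
        report.flags.append("voice drift: no on-brand vocabulary")
        report.score -= 0.2
    # Every check is traceable for governance.
    report.audit_log.append(f"checked {len(words)} words, {len(report.flags)} flags")
    report.score = max(report.score, 0.0)
    return report
```

A real deployment would replace the keyword heuristics with model-based checks, but the shape is the same: score, flag, log, and gate before publishing.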

But in a world of agent fleets, the real power is in connected applications. brand.ai plugs directly into Figma, Slack, Notion, Linear, and Shopify via OAuth 2.1 and Anthropic's Model Context Protocol (MCP). Your brand intelligence doesn't live in a separate tab. It's embedded in every tool your team already uses. When a designer opens Figma, brand context is there. When an agent drafts copy in Notion, it pulls from the same source of truth. When a Shopify workflow generates product descriptions, it knows your voice. When an engineer files a ticket in Linear, the brand's strategic priorities are part of the context. No separate app. No context switching. Brand intelligence, ambient and always on.

The result is AI that knows your brand specifically, not AI that produces generic output in your fonts. That's the difference between a tool and intelligence.

The year of orchestration

If 2024 was the year of chatting with AI, and 2025 was the year of delegating to it, 2026 is the year of orchestrating it. We've gone from talking to a single model to directing teams of agents that work in parallel across every tool in your stack.

The shift happened fast. On February 5, 2026, OpenAI launched both GPT-5.3-Codex and Frontier, their enterprise platform for agent fleets. Fidji Simo, OpenAI's CEO of Applications, put it plainly: "By end of year, most digital work will be directed by people and executed by agent fleets." Anthropic had already shipped Claude Cowork in January, turning Claude into a teammate that joins your workflow rather than waiting in a chat window. These aren't incremental updates. They're a category shift. The AI industry moved from building better chatbots to building better colleagues.

The numbers back it up. Microsoft reports 160,000+ organizations deploying custom agents via Copilot. Major financial institutions are co-developing autonomous agents with frontier AI labs. Salesforce, ServiceNow, and dozens of enterprise platforms are building agent orchestration layers. This isn't experimentation. This is infrastructure being laid at the pace of cloud adoption in the early 2010s.

For brands, the stakes are immediate. When a fleet of agents handles your marketing operations, writing copy, generating visuals, managing campaigns, optimizing spend, scheduling social, each one needs to understand your brand. Not just your color palette. Your voice. Your values. Your boundaries. The exceptions you've made and why. The campaigns that failed and what you learned. Without that context, you don't have an agent fleet. You have a slop factory with a bigger budget.

This is where context engineering replaces prompt engineering as the core discipline. Prompt engineering was about crafting the right question. Context engineering is about building the right environment: structuring your brand's knowledge so any agent, in any tool, can access it and act on it correctly. It's the difference between handing someone a script and immersing them in a culture.
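The contrast can be made concrete with a toy example. The brand record and field names below are invented, not a real brand.ai or MCP schema:

```python
# Prompt engineering: craft one good question; the brand context lives in your head.
PROMPT = "Write a product blurb for our new jacket. Sound confident but warm."

# Context engineering: build the environment any agent can draw from.
BRAND = {
    "name": "Northwind",  # invented example brand
    "voice": "confident, plainspoken, never hype",
    "boundaries": ["no politics", "no competitor call-outs"],
    "precedent": "2025 'Built for Weather' campaign set the tone",
}

def build_context(brand: dict, task: str) -> str:
    """Assemble structured brand knowledge plus the task into one context block."""
    return "\n".join([
        f"# Brand: {brand['name']}",
        f"Voice: {brand['voice']}",
        "Boundaries: " + "; ".join(brand["boundaries"]),
        f"Precedent: {brand['precedent']}",
        f"## Task\n{task}",
    ])
```

The same `build_context` call serves every agent and every tool, which is the point: the knowledge is structured once, then reused everywhere.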

Aaron Levie, CEO of Box, captured the strategic question in late 2025: in a world where everyone has access to the same intelligence, how does a company differentiate? Context. Generic AI gives everyone the same expert, same strategist, same outputs, same voice, same slop. Context makes intelligence useful. Context makes brands distinct instead of interchangeable.

MCP is the infrastructure layer making this possible. Anthropic released it in late 2024, and by early 2026 it has 97 million monthly SDK downloads. Apple, Microsoft, and Google have all adopted it. People are calling it "the USB-C of AI": a universal protocol that lets any agent connect to any tool. For brand work, this means your guidelines stop being a PDF collecting dust and become MCP-accessible context that every agent in your stack can query in real time. The PDF becomes an API.
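A minimal sketch of "the PDF becomes an API," with invented guideline data (a real system would sync from the brand's source of truth and expose it over a protocol like MCP rather than a local function):

```python
# Hypothetical guidelines, stored as structured data instead of a static PDF.
GUIDELINES = {
    "voice": {"tone": "confident, plainspoken", "avoid": ["jargon", "hype"]},
    "color": {"primary": "#0046BE", "meaning": "trust, not just a hex code"},
    "logo": {"min_clear_space_px": 24},
}

def query_guideline(path: str):
    """Resolve a dotted path like 'voice.tone', the way an agent would query a resource."""
    node = GUIDELINES
    for key in path.split("."):
        node = node[key]
    return node
```

An agent drafting copy asks for `voice.tone`; a layout agent asks for `logo.min_clear_space_px`. No one opens a 300-page document.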

Brand guidelines are already executable. We're doing it now at brand.ai. But they're about to become something bigger: the operating layer for agentic branding.

Rules alone aren't enough. The real value is in decision traces. The exceptions, approvals, precedents, and cross-system context that currently live in Slack threads, email chains, and people's heads. Why did we approve that headline? What precedent set that tone? Who signed off on that partnership? That reasoning has never been treated as data. It should be.
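Treating that reasoning as data could look something like this hypothetical decision-trace schema (field names are illustrative, not a real product's):

```python
from dataclasses import dataclass

# The point: approvals, exceptions, and their rationale become queryable records
# instead of reasoning buried in Slack threads and people's heads.
@dataclass(frozen=True)
class Decision:
    asset: str      # e.g. "holiday-campaign-headline"
    outcome: str    # "approved", "rejected", or "exception"
    rationale: str  # the "why" that usually never gets written down
    approver: str

log: list[Decision] = []

def record(decision: Decision) -> None:
    """Append a decision to the brand's history."""
    log.append(decision)

def precedents(keyword: str) -> list[Decision]:
    """Answer 'why did we approve that?' by searching past decisions."""
    return [d for d in log if keyword in d.asset or keyword in d.rationale]
```

Every recorded decision makes the next query more useful, which is why decision history compounds into a moat.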

Sam Altman argued in late 2025 that memory matters more than raw intelligence. Models are converging; products diverge based on how well they remember context. He called today's memory capabilities the "GPT-2 era of memory," implying we're very early. The brands that capture their decision history now will build a context graph: a queryable record of not just what happened, but why. That's the real unlock for autonomous brand systems. And it's a moat. Decision history compounds. The longer you've been capturing it, the smarter your brand intelligence becomes.

My bet is that within two years, brand guidelines become fully executable. Your brand knows what it should say, and flags what it shouldn't. This happens before content ships. Within five years, real-time cultural adaptation is table stakes. Systems detect weak signals and adjust tone, imagery, and channel mix before trends hit mainstream. The strategist's job shifts from writing guidelines to training systems, from approving content to designing constraints. Static guidelines will feel as antiquated as filing cabinets. The brands that get this right will have systems that learn and improve with every decision. The ones that don't will be stuck rewriting PDFs every quarter while their competitors evolve in real time.

Authenticity as strategy

The "100% human" marketing trend isn't a rejection of all AI. It's a rejection of bad AI. Brands like Patagonia, DC Comics, and Polaroid have staked out positions against AI-generated imagery. Senior executives from LVMH, Kering, Chanel, and Richemont have agreed that machines should never replace people in creative work. Chanel's tech innovation lead called the stakes "reputational, you might even say existential."

But plenty of terrible work predates AI. The world was full of mediocre campaigns, tone-deaf messaging, and brand-damaging creative long before anyone typed a prompt. "Made by humans" isn't a quality guarantee. The real differentiator is whether a human with taste and judgment directed the outcome.

The question isn't whether you use AI. It's whether you're honest about it, whether the result is any good, and whether you can explain your process when asked. In a world where AI can produce anything, the brands that show their work will earn a new kind of trust. Not trust despite using AI, but trust because of how they use it.

Two frameworks help navigate this. Anthropic's AI Fluency framework offers a practical model. The goal is using AI well, not simply using it more.

The report "From HAL to Her" approaches the question from the design side, identifying four territories for how AI presents itself. Mechanization signals control. Magic signals possibility. Biomimicry signals ease. Anthropomorphism signals companionship. Each answers the fundamental questions people ask when encountering AI: Am I being deceived? Am I being replaced? Who's really in charge?

For brands building with AI, these aren't just design choices. They're trust signals. Get them wrong and you erode the very thing you're trying to build.

The cybernetic meadow

The cybernetic meadow Brautigan imagined was about harmony between human values and technological capability. Not harmony through surrender. Harmony through mastery. For brands, that means preserving what makes them distinctly human while embracing the intelligence that can amplify it.

We're in what Jack Clark at Anthropic calls a "parallel world" moment. Those working closely with frontier AI systems already live in a different reality than those who don't. By mid-2026, that gap will be impossible to ignore. If you're a brand leader reading this and you haven't felt the earthquake yet, you will. Soon.

The numbers tell the story. AI data center investment accounted for over 90% of U.S. GDP growth in the first half of 2025. A handful of companies will spend more than the inflation-adjusted cost of the entire Apollo program in ten months. That money is building the infrastructure for a world where agent fleets handle the bulk of digital work. Brands that aren't ready for that world are already behind.

The compression Amodei described is here. What took decades will happen in years. Some experiments will break. Some AI outputs will offend. But the shift is underway, and the alternative is irrelevance.

Here's my bet: by 2028, every Fortune 500 brand will have an executable brand system. Not guidelines. Not a PDF. A living context layer that every agent in their stack can query. The companies that start building that infrastructure in 2026 will own the next decade of brand. The ones that wait will spend it catching up.

The winners won't be those with the biggest budgets. They'll be the ones that pair human judgment with machine intelligence to create living systems. Brands that move as fast as culture while staying true to themselves.

The question was never "AI or humans?" The question is: who builds the systems that keep brands coherent when everything around them is accelerating? That's what we're betting on. And if we get it right, we won't just make marketing more efficient. We'll make business more human.
