I - Always Testing, Never Learning.
For nearly a decade, I’ve been obsessed with one question:
Does it sell?
It’s a dangerous question.
It turns you into a direct response marketer before you realize what’s happened.
I hate not knowing.
I hate dashboards with 4,000 rows that create more questions than answers.
I hate waiting months to find out how an ad performed.
I hate meetings where everyone hedges about “positioning” strategies and no one can tie them to metrics that improve the bottom line.
If you’ve felt that frustration, you know exactly what I mean.
The unfortunate thing is that you can care deeply about productivity, performance, and testing, and still not find out what actually works.
Goodness knows, the first decade of my career, I made that mistake. 🤦‍♂️
I believed that hard work, copying the pros, buying expensive memberships, hoarding frameworks…
Testing, launching, running post-mortems…
Would lead to reliable methods for consistent, repeatable wins.
In layman's terms: hustle till it rains. 🤑
And I worked my hind tail off in the name of this belief:
I produced and tested over 1,800 video ads (many of which I wrote).
I spent thousands of dollars on high-level courses and masterminds.
I worked inside two serious direct response companies, one doing over $50M a year.
It looked like I knew what I was doing. And by most standards, I did. I frequently heard from supervisors and employers that I was “killing it.”
But deep down, I knew something wasn’t right, because even though I’d achieved multiple quick wins…
They’d never last…
And we were doomed to repeat the purgatory of:
Finding a winner (after multiple rounds of tests)
Scaling the winner, and beating the control
Watching the “new” control inevitably fizzle out
This horrific cycle always ended with the dreaded call where everyone asks:
“How do we do that again?”
(Spoiler: The reason we couldn’t repeat the win is we didn’t isolate the variable that had actually driven the result.)
So we’d end our post-mortems with a call to arms to just create more ads.
Joy.
Here’s what I know now that I wish I’d known then.
II - How “Successful” Marketers Sabotage Their Own Learning
In ancient Athens, long before James Clear, Aristotle was already wrestling with this problem.
We’ve all heard this line before:
“You don’t rise to the level of your goals. You fall to the level of your systems.”
It’s funny. Recently Chris Williamson dropped a video with the title “atomic habits lied to you (kinda).”
Except Chris Williamson didn’t nail it the way I’m about to. The timing was uncanny, because I had recently started exploring a new approach to self-help (and marketing) inspired by Aristotle.
Here’s the catch: systems are helpful, but Aristotle realized that systems need direction.
He understood that growth requires a self-regulating structure, something that generates variation but also constrains it. Without that governor, potential doesn’t become actual. It just disperses.
In plain English: systems in and of themselves are not the goal. We are pursuing specific outcomes. If your testing process doesn’t function as a learning engine driving toward those outcomes (generating ideas while isolating variables and refining assumptions), you don’t move in a profitable direction.
If you don’t believe me, watch what happens in the absence of that learning engine.
Founders and operators say they want predictability.
They say they want stable growth.
They say they want clarity.
But when revenue tightens or CPA creeps up, the reflex is almost always the same:
More execution.
More hooks.
More angles.
More creatives.
More optimizations.
Because execution feels controllable. Questioning the system does not.
Execution creates the sensation of progress. System design creates uncertainty.
The problem is simple: execution alone does not generate insight.
It generates data points.
And data without isolation generates noise.
Let me show you what that looks like in real life.
When you’re inside a high-output direct response team producing hundreds of ads a month, pressure compounds quickly. If no one is learning, everyone starts compensating in ways that make the problem worse.
Founders begin to question whether the team can pull through.
They get more involved.
There are more meetings.
More projects.
More debate.
Well-meaning contributors scramble to help. They build new dashboards. Pitch new angles. Suggest new tests.
Everyone is working harder.
But success becomes fragile.
Results spike.
Then fade.
Then reset.
Nothing stabilizes.
This is where another concept becomes useful.
In biology, there’s something called degeneracy.
It means different internal causes can produce the same outward result.
Imagine three teams launching three ads. All convert at 4.5%. Same CPA. Same revenue. Same apparent success.
But internally:
Team A understood the audience precisely.
Team B used a creative format that captured attention at the right moment.
Team C caught a trend early and rode it.
From the outside, the numbers look identical.
Underneath, the mechanisms are completely different.
In other words, you’re staring at a black box.
You know it worked.
You don’t know why.
And because it’s nearly impossible to know why, teams react as most sensible people do.
They shrug their shoulders, celebrate the win, and ask the creative team to “keep on doing what they did.”
Did revenue increase?
Yes.
But did the company isolate the causal variable so it can be reused deliberately?
No.
So they scale the result without understanding it.
They worship the winner.
Then they try to repeat it later and can’t.
That is degeneracy.
It creates the illusion of understanding where none exists.
III - The Real Problem
We live in a world where everyone believes they are “data-driven.”
They know they need:
Attribution
Dashboards
Performance signals
Attribution tells you what happened.
It does not tell you why it happened.
If a test changes the hook, the visual structure, and the offer framing at the same time, and performance improves, you have not learned which variable mattered.
You have learned that something in that combination worked.
That distinction is everything.
If your process does not force you to isolate assumptions one at a time, you cannot systematically accumulate understanding.
And without accumulated understanding, every win sends you back to square one:
Guessing. (Yuck.)
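The “isolate assumptions one at a time” rule above can be sketched as a simple guard in a hypothetical test log. Everything here is illustrative (the class, field names, and example entries are mine, not from any real tool), but it shows the core filter: a test only counts as a learning test if exactly one variable differs from the control.

```python
from dataclasses import dataclass, field

@dataclass
class AdTest:
    """One entry in a hypothetical test log (names are illustrative)."""
    hypothesis: str                # the assumption being checked
    control_id: str                # the ad we're comparing against
    changed_variables: list = field(default_factory=list)  # what differs from control

def is_valid_learning_test(test: AdTest) -> bool:
    # A test only teaches you something if exactly one variable
    # differs from the control; otherwise the result is ambiguous.
    return len(test.changed_variables) == 1

good = AdTest("'Isn't greasy' beats 'hydrates quickly'", "ctrl-01", ["benefit_claim"])
bad  = AdTest("New everything", "ctrl-01", ["hook", "visuals", "offer_framing"])

print(is_valid_learning_test(good))  # True  -> counts as a learning test
print(is_valid_learning_test(bad))   # False -> just a data point
```

The point of the guard isn’t the code; it’s the discipline. If the log rejects multi-variable tests by default, “we learned something” stops being a matter of opinion.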
This is where strategy gets misunderstood.
Most people believe that strategy is high-level planning or market research.
This is confusing what strategy does (activities) with what it’s actually trying to accomplish.
A correct definition of strategy is this:
Strategy is the disciplined structure that turns action into learning.
It determines what gets tested, in what order, and which assumptions are isolated.
It also determines how results update the team’s model of the market.
Everything changes when you infuse this structure into your marketing.
Teams stop over-fitting stories in an attempt to explain performance.
You have fewer, more focused meetings.
You start to focus on the variables (the specific elements of your messaging) that actually drive results.
Brainstorming becomes focused discussion anchored to specific hypotheses and specific outcomes.
Instead of a culture of “winner-worship,” teams start looking behind each win and isolating the variables that produced it.
IV - If I Could Start Over
Earlier I told you that for nearly a decade I labored under the same mistake:
A drive to produce, but no real learning.
And one simple question would have surfaced exactly why:
“What specifically caused our last win?”
Back then, I would have probably said something like:
“The skip stopper was more hypnotic and related to the problem mechanism.”
It sounded smart back then but I’m rolling my eyes thinking about it now.
In all fairness, I didn’t know better…
I thought that an accurate description passed the bar.
And most people did.
But a real explanation names the variable.
It tells you what changed relative to the control.
Here’s an example:
“In this ad we tested new benefits positioning for lotion. Instead of saying it ‘hydrates quickly,’ we focused on how it ‘isn’t greasy.’ The ‘isn’t greasy’ ad had a higher conversion rate and a lower cost per acquisition at relatively low spend. Because this variable led to significant lift, I believe we should try new versions of our controls with this core message.”
Now THAT’S learning.
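To make “names the variable” concrete, here’s a minimal sketch of the readout behind a statement like that. The numbers are made up for illustration (not from a real test); the only thing that matters is that one variable differs, so the delta in the two metrics can be attributed to it.

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    # Share of visitors who converted.
    return conversions / visitors

def cpa(spend: float, conversions: int) -> float:
    # Cost per acquisition: total spend divided by conversions won.
    return spend / conversions

# Hypothetical results: only the benefit claim changed between the two ads.
control = {"claim": "hydrates quickly", "visitors": 1000, "conversions": 30, "spend": 450.0}
variant = {"claim": "isn't greasy",     "visitors": 1000, "conversions": 45, "spend": 450.0}

for ad in (control, variant):
    cvr = conversion_rate(ad["conversions"], ad["visitors"])
    print(f'{ad["claim"]}: CVR {cvr:.1%}, CPA ${cpa(ad["spend"], ad["conversions"]):.2f}')
# Because exactly one variable differs, the lift is attributable to the claim.
```

With these illustrative numbers, the variant converts at 4.5% versus 3.0% for the control, and CPA drops from $15.00 to $10.00. That’s a sentence a team can reuse deliberately.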
V - What Changes When You Learn
When a marketing team reorganizes around learning instead of output, the structure changes.
And when the structure changes, behavior changes with it.
That’s emergence.
Emergence just means this: when parts of a system interact under clear conditions, new possibilities arise that weren’t there before.
Applied to marketing, that means once you impose a disciplined learning loop (isolated variables, defined assumptions, clear contrasts), the entire department starts operating differently.
You don’t need more rules. You need better constraints.
In most companies, activity is rewarded:
Quick ideas.
Confident opinions.
New angles.
Those behaviors thrive because nothing filters them.
When learning becomes the center, the filter appears.
A vague explanation doesn’t move forward.
A test that changes five things at once doesn’t count.
A “this felt stronger” argument doesn’t survive review.
The team adjusts.
People show up with defined variables.
They reference specific deltas.
They build on prior tests instead of replacing them.
You feel the shift almost immediately.
Meetings get shorter because the question is precise.
Debate narrows because disagreement resolves into experiment design.
Momentum builds because you aren’t restarting every month.
The relief isn’t emotional.
It’s structural.
The system makes sense again.
You can trace cause.
You can refine instead of reset.
You can see skill accumulating over time.
Most companies don’t have a creativity problem.
They don’t have an effort problem.
They have a learning problem.
Fix the structure, and the right behaviors start to emerge naturally.
Skill compounds because judgment improves.
And improved judgment under pressure is what founders actually want.
