
The Store Owner Who Tested 47 Different Button Colors (And Why It Didn't Work)

Sarah had read all the blogs.

She knew CRO was important. She knew A/B testing was the path to higher conversions. She knew that "data-driven decisions" were the key to success.

So she did everything right.

She installed VWO. She set up her first A/B test: green button vs. orange button. She waited for statistical significance. She declared a winner.

Orange won. 0.3% higher conversion rate.

Thrilled, Sarah moved on to her next test. Font size on the product title. Then the header image. Then the exact placement of trust badges. Then a countdown timer.

Over 18 months, Sarah ran 47 A/B tests.

Her conversion rate went from 2.1% to 2.4%.

An improvement, sure. But given the hundreds of hours she'd spent designing tests, waiting for results, and implementing winners... was it worth it?
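Part of the problem is pure arithmetic: detecting a lift as small as 0.3 percentage points takes an enormous amount of traffic. Here's a rough sketch of the standard sample-size math for a conversion A/B test (the function name and numbers are my own illustration, assuming a two-sided test at 95% confidence and 80% power):

```python
import math

def sample_size_per_variant(p_base, p_variant):
    """Approximate visitors needed per variant to detect the difference
    between two conversion rates (normal approximation, two-sided test
    at 95% confidence with 80% power)."""
    z_alpha = 1.96  # critical value for a two-sided 5% significance level
    z_beta = 0.84   # critical value for 80% power
    p_bar = (p_base + p_variant) / 2
    delta = abs(p_variant - p_base)
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    return math.ceil(n)

# Sarah's button test: 2.1% baseline, hoping to detect a 0.3-point lift
print(sample_size_per_variant(0.021, 0.024))  # roughly 38,000 visitors per variant
```

At typical small-store traffic, a single test like that can take months to reach significance, which is why 47 of them ate 18 months.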


The Uncomfortable Conversation

At an ecommerce meetup, Sarah struck up a conversation with someone who ran a similar store.

"What's your conversion rate?" Sarah asked.

"Around 4.2% last time I checked."

"Wow. How many A/B tests did you run to get there?"

The other store owner laughed. "Maybe... two? I just fixed the obvious stuff."

Sarah felt her stomach drop. "What do you mean, 'obvious stuff'?"

"My checkout was a disaster. Four steps when it should have been one. Shipping costs hidden until the last page. No guest checkout. Once I fixed those, conversions basically doubled. Haven't tested much since."


The CRO Theater Problem

Sarah had fallen into what I call "CRO Theater"—the appearance of optimization without actual optimization.

She was running tests. She was following "best practices." She was being "data-driven."

But she was optimizing the wrong things.

> "I know there's a million ways to do CRO, I've done LinkedIn courses, etc. but what's actually worked for your business? Especially if you have some easy/quick changes that produced great results and didn't require a 6-month overhaul."

The truth: Button colors don't matter when your checkout is broken.

Font sizes don't matter when visitors can't find the add-to-cart button on mobile.

Trust badge placement doesn't matter when you have no value proposition.

Sarah was polishing the brass on the Titanic.


What Actually Moves The Needle

Here's what we've found after auditing hundreds of ecommerce stores:

High-impact changes (can move conversion 50%+):

- Fixing broken functionality

- Reducing checkout friction

- Removing surprise costs

- Adding missing payment options

- Clarifying value proposition

Medium-impact changes (10-30% improvement):

- Page load speed

- Mobile experience optimization

- Trust signals in the right places

- Better product photography

- Clearer calls to action

Low-impact changes (usually <5%):

- Button colors

- Minor copy tweaks

- Font choices

- Layout shuffles

Sarah spent 18 months on low-impact changes while her high-impact issues sat unfixed.


Why Do Smart People Make This Mistake?

A few reasons:

1. Low-impact tests are easy to run.

Changing a button color is simple. Redesigning checkout requires decisions, possibly developers, and carries risk.

2. Small wins feel like progress.

"Orange won!" feels productive. It's measurable. It's satisfying. Even if it doesn't really matter.

3. Nobody told them what the real problems were.

> "I've been suffering from paralysis analysis and haven't touched [Google Analytics] in ages."

When you don't know what's actually broken, you test randomly.

4. CRO gurus focus on testing methodology, not prioritization.

The CRO industry makes money selling testing tools and courses. "Fix the obvious stuff first" isn't a sexy message.


The Better Approach

Before you run a single A/B test, answer these questions:

1. What's the single biggest drop-off point in my funnel?

(If you don't know, you're not ready to test)

2. What are customers actually complaining about?

(Check support tickets, reviews, survey responses)

3. What's broken?

(Watch session recordings. Test on multiple devices.)

4. What would I fix if I could only fix ONE thing?

(That's your priority, not button colors)

Only after you've fixed the obvious, high-impact issues should you start running micro-optimization tests.
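The first question takes five numbers, not a testing platform. A minimal sketch of finding the biggest leak (the funnel steps and visitor counts below are made up for illustration; pull your real ones from analytics):

```python
# Hypothetical funnel counts from analytics (illustrative numbers only)
funnel = [
    ("Product page", 10_000),
    ("Add to cart", 2_400),
    ("Checkout start", 1_100),
    ("Shipping info", 600),
    ("Purchase", 240),
]

# Compare each step to the next and track the worst step-to-step drop-off
worst_step, worst_drop = None, 0.0
for (name_a, n_a), (name_b, n_b) in zip(funnel, funnel[1:]):
    drop = 1 - n_b / n_a
    print(f"{name_a} -> {name_b}: {drop:.0%} drop-off")
    if drop > worst_drop:
        worst_step, worst_drop = f"{name_a} -> {name_b}", drop

print("Biggest leak:", worst_step)
```

With numbers like these, the product page is bleeding 76% of visitors before they even add to cart — and that, not a button color, is where the fixing starts.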


Sarah's Epilogue

After the meetup conversation, Sarah went back to her store with fresh eyes.

She watched 50 session recordings.

She saw:

- Mobile users struggling to scroll past her massive hero image

- Checkout abandonment when "calculate shipping" spun forever

- Product pages missing size information (her #1 support question)

- A broken coupon code field that crashed the checkout

In one weekend, she fixed all four issues.

Her conversion rate jumped from 2.4% to 3.6%.

All those A/B tests over 18 months had given her 0.3 percentage points. Four bug fixes in one weekend gave her 1.2.


What's The Obvious Thing You're Ignoring?

You probably already know something is wrong with your store. There's a page that doesn't feel right. A checkout flow that seems clunky. A mobile experience you've never actually tested.

That's not where A/B tests start. That's where fixes start.

Our CRO Audit doesn't just find things to test. It finds things to fix—in priority order.

Stop polishing the brass. Start plugging the leaks.

Ready To Fix This For Your Store?

We'll find your specific issues and give you a prioritized action plan.

Get Your CRO Audit