UX/UI Best Practices Won’t Get You the Results You’re After

You read that right.

Best practices won’t get you the results you’re after.

That may seem counterintuitive… they are called best practices, after all.

The truth is, best practices are a good starting point. But when it comes to UX/UI design, the only way to surpass competitors who are all leveraging the same best practices is to do something different.

Why ditching best practices is the new best practice

It doesn’t matter whether you operate a B2B or B2C business: your audience wants a personalized experience that answers their questions and addresses their needs. The status quo just won’t cut it.

The status quo is about playing it safe: minimalistic layouts, hamburger menus on mobile, CTAs placed above the fold, and standardized color schemes.

These are the design “rules” that many companies rely on because they’re considered best practices. But these rules, meant to appeal to the widest possible audience, won’t leave you room to address the unique needs of your users. You have to differentiate to get ahead.

So, how do you ensure that doing something different won’t result in a worse conversion rate?

Well, you can’t.

Stepping outside of best practices requires you to take some risks. And when you take risks, they don’t always pan out.

That said, there are ways to limit your risk exposure and still break free of the chains of best practices. Here’s what our Chief Creative Officer, Erin Hunt, had to say on the topic.

Understand your target audience

Tailoring user experience to your target audience requires you to actually understand what they want and need.

“Talk to your customer support team, talk to your sales team, read every review, ask for direct feedback from your customers, set up heat mapping and watch how users are engaging with the site. Start compiling all of that feedback in a spreadsheet and look for common threads,” said Erin Hunt.

It can be tempting to make assumptions about what your customers need or want, but real user feedback is invaluable to your success. Don’t skip this step.
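If you’re compiling that feedback in a spreadsheet, even a small script can help surface the common threads. Here’s a minimal sketch in Python, assuming a hypothetical feedback.csv export with a comment column and a hand-picked set of theme keywords:

```python
# A minimal sketch: tally recurring themes across feedback collected in a
# CSV export. The file name, column name, and theme keywords are all
# hypothetical placeholders for your own data.
import csv
from collections import Counter

# Themes you expect to see, mapped to keywords that signal them.
THEMES = {
    "slow pages": ["slow", "loading", "lag"],
    "confusing navigation": ["can't find", "confusing", "menu"],
    "pricing questions": ["price", "cost", "plan"],
}

counts = Counter()
with open("feedback.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        comment = row["comment"].lower()  # assumes a 'comment' column
        for theme, keywords in THEMES.items():
            if any(kw in comment for kw in keywords):
                counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: {n} mentions")
```

Keyword matching this crude will miss plenty of nuance, but it’s often enough to rank which problems come up again and again.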

Create your hypotheses

Once you’ve gathered feedback and identified common threads, it’s time to turn those insights into problem statements and hypotheses for how you can improve the experience.

  • What are the pain points your audience is experiencing?
  • Are there opportunities to drive more relevant traffic?
  • Can you streamline the customer journey or provide a more engaging experience? 

Hunt explains, “Think of each hypothesis as a potential solution to a specific problem your audience faces. For example, if users are bouncing from a key landing page, hypothesize why: Is it slow loading times? Unclear messaging? Poor ad targeting? A lack of compelling CTAs?”

By creating focused, testable hypotheses, you’re setting the stage for informed experimentation—where every change has a clear purpose tied to your user feedback.
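One way to keep hypotheses focused is to write each one down in a consistent shape. The sketch below is a hypothetical structure, not a prescribed format: each entry ties one observed problem to a single proposed change, a success metric, and the evidence behind it.

```python
# A hypothetical structure for a hypothesis backlog: each entry ties one
# observed problem to a single proposed change and a clear success metric.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    problem: str    # the pain point, grounded in user feedback
    change: str     # the one variable you plan to test
    metric: str     # how you'll judge success
    evidence: str   # the feedback or data that motivated it

backlog = [
    Hypothesis(
        problem="Users bounce from the pricing landing page",
        change="Rewrite the headline to lead with the core benefit",
        metric="Bounce rate",
        evidence="Heat maps show most visitors leave before scrolling",
    ),
]

for h in backlog:
    print(f"{h.problem} -> test: {h.change} (measure: {h.metric})")
```

If a hypothesis can’t fill in all four fields, it probably isn’t testable yet.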

Set up A/B tests

Isolate one variable to change at a time—whether it’s a headline, call-to-action button, or the placement of key elements on your page. “The goal isn’t to overhaul everything all at once,” explains Hunt. “It’s about making incremental changes and measuring their impact.”

Use a tool like VWO or Optimizely to run your test, and make sure you have a clear metric in place to evaluate success.
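Tools like VWO and Optimizely handle traffic splitting for you, but it can help to see the idea behind it. The sketch below illustrates deterministic assignment; the function name and parameters are illustrative, not any tool’s actual API:

```python
# Not any tool's actual API; just an illustration of deterministic
# assignment: hashing the user ID means the same visitor always lands in
# the same variant, so their experience stays consistent across visits.
import hashlib

def assign_variant(user_id: str, experiment: str, traffic_share: float = 1.0) -> str:
    """Return 'A', 'B', or 'excluded' for a given user and experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    if bucket > traffic_share:
        return "excluded"  # outside the test; this user sees the control
    return "A" if bucket < traffic_share / 2 else "B"

# Expose the test to only 20% of traffic, split evenly between variants.
print(assign_variant("user-123", "headline-test", traffic_share=0.2))
```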

Depending on how much traffic your site receives, reaching statistically significant test results can take some time. If your site gets a lot of traffic, you may want to limit the test to a smaller portion of that traffic to minimize the impact of an underperforming variation.
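You can get a rough sense of that timeline up front with the standard two-proportion sample-size formula. In the sketch below, the baseline conversion rate, expected lift, and daily traffic are all placeholder numbers:

```python
# A rough sketch of estimating test duration with the standard
# two-proportion sample-size formula. The baseline rate, expected lift,
# and daily traffic below are placeholder numbers, not benchmarks.
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per variant to detect a shift from p1 to p2."""
    z_a = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_b = norm.ppf(power)          # desired statistical power
    p_bar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

n = sample_size_per_variant(p1=0.03, p2=0.036)  # 3% baseline, 20% relative lift
daily_per_variant = 500  # visitors entering each variant per day, after any cap
print(f"{n} visitors per variant, roughly {ceil(n / daily_per_variant)} days")
```

Notice the trade-off: capping the test to a smaller slice of traffic lowers your risk exposure but stretches the time to significance.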

Review test results & implement changes

Most A/B testing tools have built-in analysis features that alert you when a test has reached a statistically significant result for the metric you’ve specified, whether that’s conversion rate, bounce rate, time on page, or something else.

While that primary metric is key to setting up the test, you’ll also want to look at other metrics that may have been impacted before selecting a winner and implementing the change.
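Under the hood, that alert typically comes down to a significance test on your primary metric. Here’s a minimal sketch of a two-sided, two-proportion z-test on conversion counts; the counts are placeholders:

```python
# A minimal sketch of the significance check a testing tool runs for you:
# a two-sided, two-proportion z-test on conversion counts. The counts
# below are placeholders.
from math import sqrt
from scipy.stats import norm

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * norm.sf(abs(z))

p = two_proportion_p_value(conv_a=420, n_a=14000, conv_b=505, n_b=14000)
print(f"p-value: {p:.4f} -> {'significant' if p < 0.05 else 'keep running'}")
```

The same check works for any secondary conversion-style metric you want to review before declaring a winner.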

Don’t worry if your test fails.

Take that new information and use it to iterate and test a new hypothesis. The whole point of A/B testing is to try something new and see if it improves results.

Remember, if you aren’t failing, you aren’t testing enough.

Balancing data, time, and risk in UX/UI design

A/B testing is incredibly valuable because of how quickly, and often inexpensively, you can launch and test an idea. It gives you a level of quantitative confidence that signals whether a change is worth pursuing.

However, blindly focusing on metrics can snowball into a mindset of endless iteration on one small detail, preventing you from thinking about big-picture growth and evolution.

You become hyper-focused on beating the control version in more incremental ways. Suddenly, a test that took a few days becomes an ongoing time investment that spans weeks and months with very little to show for it.

That’s time that could have been spent exploring riskier but more delightful and disruptive innovation: changes that may not immediately pan out but have greater potential to resonate with customers in the long term.

So, if best practices aren’t leading to lasting results, should you just listen to customers and swing for the fences? Well no, because the short-term disruption may not leave you around long enough to see the results.

It’s really about finding balance. Use the insights from your audience to implement and test sustaining innovations while betting on larger ideas that will strengthen their connection to your brand.

Need help getting started? Let’s chat.