30 second summary:
- Blogs are a valuable and insightful arm of a brand’s marketing strategy.
- The downside, of course, is that industry blogs are filled with untested theories and can resemble echo chambers if you rely only on them.
- Marketing innovations don’t come from reading the work of others, but from continually testing and trying new approaches to age-old problems.
- Sarah Fruy outlines testing strategies that marketers can use to maximize the value of their funnels.
If more than a decade of work in the marketing trenches has taught me anything, it is that no silver bullet will make all your problems go away.
Unfortunately, there is a misconception that a single answer is available via a simple Google search. A marketer finds success with a certain tactic, blogs about it, and encourages everyone to use the same strategy. Before you know it, the tactic is featured in list after list, and more and more people consider it a best practice.
Rinse and repeat.
While this tactic might be the right strategy for your business, chances are it isn't. I strongly recommend scouring the internet for helpful advice, but you need to test each theory with your own audience on your own platforms. Consumer behavior is constantly evolving. As effective marketers, we need to test our theories as often as possible to avoid costly mistakes.
Building a test culture
Marketers who neglect testing are more likely to use a waterfall approach than an agile one. They believe that success relies on launching big-bang campaigns, with a long planning period leading up to a major release. They value their instincts over data-driven decisions.
This could be due to a lack of familiarity with agile marketing practices, or to an organization that requires many levels of approval before anything gets started. Others believe that success lies in following in the footsteps of “bigger” marketers and implementing their playbooks as faithfully as possible. They think, “If it worked for them, it will work for me.”
Others might cite budget as an obstacle. But even a bootstrapped startup with no budget can find ways to test and validate ideas before going all-in. A large budget does not prevent failure, either; larger companies also suffer from premature releases of products and ideas. In recent years, for example, Microsoft has built a reputation for clunky products and campaigns, from Vista to broken chatbots, that suffered from rushed rollouts.
Who wants to risk failure with that much time and money at stake? A few factors will help you better analyze your campaigns and set up a more successful testing program. Use these tips to build the testing culture you need to succeed:
1. Work with a cross-functional group
A recent Deloitte report found that 89% of executives cited organizational design by teams as the top priority for overcoming challenges in their company.
Building cross-functional teams of people with different skills from different departments enables faster communication and decision-making. It also brings more diverse perspectives and experiences into the conversation, so you can interpret data points from different angles and come up with more creative test ideas.
If you are not ready to completely reorganize your operations, start by establishing an outside-in approach to generating ideas: bring in members of other departments for input and new concepts. Even a single outside perspective strengthens the ideas you develop for new campaigns and helps you identify potential problems before implementation.
2. Do not perform one-off tests
The initial goal of an agile marketing team is usually to release a minimum viable version of a campaign, then test the waters and see how a selected segment of your market reacts.
When you get a signal that the campaign is working, you develop and reinforce that success. If the campaign doesn’t meet your expectations, iterate on your approach or switch to a new program. Using the data as a guide, you’ll build a complete testing culture so that you can validate every initiative you launch.
At the beginning of this year, we decided to test exit-intent modals on our website. The first results were positive, so we scaled them to more pages and continued to see success. Our next step was to personalize the experience and test different types of creative messaging.
As you can see, a single idea (“Should we implement exit-intent modals on our site?”) generated an ever-growing list of test ideas for our team. A well-structured program will run plenty of tests to prove a hypothesis before it is ready for full exposure. If you stop after a one-off test, a lot of valuable data goes uncollected.
3. Do not optimize for a single metric
It’s easy to set up a test and optimize for clicks or form fills. However, you may also want to consider the long-term implications of short-term gains.
Maybe more people are signing up for your free trial, but they are also churning at a higher rate. If you optimize only for sign-ups, you will miss the bigger insight: this new audience will actually decrease your overall sales. To see why you can’t focus on a single metric, consider Homejoy, a home-cleaning startup.
The company put a lot of resources into a single metric – customer acquisition. A first-time-customer promotional price of $19 fueled the growth. However, the business mainly attracted a customer base interested in the discount. Without a focus on customer loyalty, Homejoy saw only 25% of those customers return; growth stagnated, which accelerated the company’s failure.
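The retention math behind a story like Homejoy’s is easy to sketch. The 25% repeat rate comes from the paragraph above; the monthly acquisition figure and the simple one-month carry-over model below are invented purely for illustration:

```python
# Toy cohort model: why low repeat rates cap growth.
# Assumption (illustrative): each month brings a fixed number of new
# customers, and only `repeat_rate` of last month's actives return.

def active_customers(months, new_per_month=1000, repeat_rate=0.25):
    """Return the number of active customers in each month."""
    history = []
    active = 0.0
    for _ in range(months):
        active = new_per_month + active * repeat_rate
        history.append(round(active))
    return history

print(active_customers(6))
# With a 25% repeat rate, actives stall near 1000 / (1 - 0.25) ≈ 1333,
# no matter how long you keep spending on acquisition.
```

The plateau value, new customers divided by the churn fraction, is the whole point: when only a quarter of discount-driven customers come back, acquisition spend buys a ceiling, not a growth curve.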
4. Include both quantitative and qualitative research
A few years ago, I was under pressure at my startup to sponsor an event to promote our new product. As we were in the early stages of product development, our customer persona was not fully developed, but the opportunity guaranteed press coverage and heavy foot traffic in the market where we were beta testing. So I took a chance and got us a booth.
Unfortunately, we found that the audience at the event was not a fit for us, something we would have learned had we attended the event the year before and interacted with the audience firsthand.
Running tests can be very exciting when you reach statistical significance. However, don’t let this evidence stop you from actually speaking to your audience. Make sure you include open-ended surveys or user groups in your research program. Try to balance your research so that you understand not only how a user reacts, but also why the user reacts that way.
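For the quantitative half, “reaching statistical significance” has a concrete meaning. Here is a minimal sketch of the standard significance check for an A/B conversion test, a two-sided, pooled two-proportion z-test; the conversion counts in the usage lines are made up for illustration:

```python
from math import erf, sqrt

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates, using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert the z-score to a two-sided p-value via the normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical results: variant B converts 7.8% vs. 6.0% for control A.
p = ab_test_p_value(120, 2000, 156, 2000)
print(f"p-value: {p:.4f}")  # below 0.05, so the lift is significant
```

A significant p-value only tells you that the two audiences reacted differently; the qualitative research above is what tells you why.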
No single idea or marketing initiative will work for everyone, no matter what the blogs say. Don’t rely on untested insights to drive your campaigns forward. Instead, do your research, narrow the scope down to your needs, and test each new plan to make sure you have a real winner on your hands.
Sarah Fruy, Director Brand and Digital Experience, leads the strategy and goals for the Pantheon website and branded content. You can find Sarah on LinkedIn and Twitter @sarahfruy.