the worst advice you’ll ever get

Having worked on both the client side and the agency side, I’ve heard and said many things that are “best practice” and “good advice.”  Of course I was always right. OK, that’s a lie; I’m sure I gave bad advice at times just like everyone else. I would’ve thought it was a good idea at the time, but we can’t always be right (gasp!).  So how do you figure out what is good advice and what is bad?  How do you know whether the “trusted adviser” you work with is actually giving you good information?

Let’s look at signs you are getting bad advice:

  • They speak in absolutes.  Nothing is quite as simple as “always” and “never” in this world.  For example, your subscriber base may be responsive to three emails a week; then again, they may not be.  How do you find out?  By testing and by giving them choices in their subscription preferences.  If a consultant marches in and says “never” or “always” about your sending frequency, they don’t show much knowledge of your situation.
  • They compare you to the “industry” at large.  Let’s face it: as much as we all wish there were “industry” benchmarks that truly applied to what our companies do and who they speak to, so we could compare apples to apples, there just aren’t.  Be smart and benchmark against your own performance.  If you do plan to use industry benchmarks, be discerning: know who is included in the “industry” you are comparing yourself to, and understand whether it’s a true comparison or a thumb in the wind.
  • They do not understand the technology.  Let me be clear here: your strategy consultant may not be a Marketo whiz or an Eloqua guru, but they should know what the technology is capable of and how you can best optimize your use of it.  In broad brush strokes they should understand the tools well enough to recommend what you need for your specific situation and, if need be, help you find the technical expertise to fill the gaps.  If they cannot help you understand what you need to get the job done, give them the boot.

At the end of the day the key is to find a partner or strategist (whether an internal resource or an external one) who can really dig in and be specific.  Speaking in platitudes and generalizations will get you exactly nowhere.

test me, i dare you

Remember how excited you were about testing when you were sitting around the conference table?  Yeah, I know, now it’s real work, but a little elbow grease will yield good, actionable results, so stick it out.

At this point you have a good feel for what your assumptions are.  Will the blue CTA win out?  Will the text-only email perform better than an image-heavy version?  Take bets and make some guesses.  The only way to find out is to let it go, then sit back and wait.  Here comes the least popular thing you can tell a marketer: one test doesn’t give you all the answers.  You want to drive results, right?  This is all about conversions and what performs best, so you have to know whether or not your results are significant.  No, it’s not as simple as saying email A got an 8% open rate and email B got a 9% open rate, so BOOM, email B worked!  It’s a little more involved than that.  Some marketers are lucky enough to have statisticians on staff, but that’s a rarity.  So how do you decide something is significant?  Math, of course (scary, I know).  In textbook terms, you need to run significance testing.  Good news: there are calculators online for that!  Bad news: you still have to know a few things to use them effectively.  So before I point you to the calculator, here’s what you need to know:

A Low Confidence Level is defined as being in the range of 90-95%, while High Confidence Levels are defined as 95-99%+. If you plan to conduct multiple tests or have low traffic, go with the Low level. If you have high traffic or “significant business value flowing” through the page, go with the High Confidence Level. How do I know this, you ask?  Because I love to read, and I spent some time reading a great book on Landing Page Optimization.  It’s a well-written book for those who want lots of detail on landing page optimization, and I definitely recommend it; for your “Cliff Notes” version of testing, though, all you need to know is whether you are going with Low or High confidence.   The place to use this newfound knowledge?  Well, this handy calculator of course!
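If you’d rather peek under the hood of those online calculators, here’s a minimal sketch of the math most of them run for comparing two rates: a two-proportion z-test, using only Python’s standard library. The send and open numbers are made up for illustration.

```python
import math

def significance(opens_a, sent_a, opens_b, sent_b):
    """Two-proportion z-test: is the difference in open rates real?"""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    # Pooled rate under the assumption the two emails perform the same
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 8% vs. 9% open rate, 1,000 sends per version
z, p = significance(80, 1000, 90, 1000)
print(round(p, 3))  # well above 0.05, so NOT significant at 95% confidence
```

With only 1,000 sends per version, that one-point lift could easily be noise; a smaller p-value (below 0.05 for a 95% confidence level) is what lets you call a winner.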
So what happens if you are testing email, run the results through the calculator, and GASP! your results aren’t significant?  Run your test again!  No, I don’t mean email the same people with the same stuff all over again, not unless you want to make your audience angry.  What I mean is develop a new email that tests the same hypothesis and send it to the same folks.  Don’t be afraid to retest your assumptions and even known results over time.  Your audience changes, and so do the things that make them react, so don’t be afraid to revisit old tests to see how things change.
What if you aren’t testing email?  What if you’re building a fabulous landing page and need to do some testing there?  How do you determine significance if you don’t have fancy-schmancy software to help you out?  You can read all about it in this great post by Hubspot, where they even include a handy spreadsheet to use in the calculations.

easy as a…b…multivariate?

The scene: a small conference room full of marketers.  The problem: one of them wants to “test” things.  Everyone smiles, nods, knows it needs to happen, and then walks away happy with no real plan in place and no idea what the plan should even be.   Where do you go from here, and what do you do with it all?

Start off by adopting a framework for testing and a methodology.  This is a step all too often skipped.  Think back to the days of elementary school science fairs…scientific method anyone?

  • Formulate a question – What are you testing?
  • Develop a hypothesis – Which of the items being tested will be well received?
  • Predict the results – Make some assumptions
  • Perform your test – Do work!
  • Analyze the results – Measure, measure, measure
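
As a sketch, the steps above could be captured in a simple test-plan structure before anyone touches a tool; the fields and example values here are hypothetical, not from any particular platform:

```python
from dataclasses import dataclass

@dataclass
class TestPlan:
    """One record per experiment, mirroring the scientific-method steps."""
    question: str    # Formulate a question - what are you testing?
    hypothesis: str  # Develop a hypothesis - what do you expect to win?
    prediction: str  # Predict the results - your assumption, on the record
    metric: str      # Analyze the results - what you will measure

plan = TestPlan(
    question="Does send day affect open rate?",
    hypothesis="Tuesday sends will out-open Friday sends",
    prediction="Tuesday wins by at least one percentage point",
    metric="open rate",
)
print(plan.metric)  # open rate
```

Writing the prediction down before the test runs is the point: it keeps the analysis honest when the results come in.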

First, decide what kind of test you are going to run.  Here are a few options:

A/B or Split testing: This involves two versions of the same thing (a web page, email, asset, etc.).  The plan is to divert half of your test subjects to one version of the asset and half to the other.  In this scenario you make a single change between the two versions; variations can be as simple as the color of an element, the placement of a call to action, the font being used, or the day of the week an email is sent.  The possibilities are endless.
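To make the 50/50 split concrete, here’s a minimal sketch in Python. The subscriber list and the fixed seed are illustrative assumptions, not part of any particular email tool:

```python
import random

def split_test(subscribers, seed=42):
    """Randomly assign each subscriber to version A or B (50/50 split)."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(subscribers)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

subscribers = [f"user{i}@example.com" for i in range(1000)]
group_a, group_b = split_test(subscribers)
print(len(group_a), len(group_b))  # 500 500
```

The random shuffle matters: splitting alphabetically or by signup date can quietly bias one group and muddy your results.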

Multivariate testing:  The process that allows testing of multiple variations in the same test.  In a multivariate scenario you make many different changes in an attempt to find the biggest impact the fastest across your experiment.

The important thing to know here: if you are doing multivariate testing on a landing page or web property, you will need substantial traffic to get real answers.
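To see why traffic matters, here’s a rough back-of-the-envelope sketch in Python. The variant counts and the per-combination visitor figure are illustrative assumptions, not a rule; the point is that combinations multiply:

```python
# Each element you vary multiplies the number of combinations under test.
variants = {"headline": 3, "cta_color": 2, "hero_image": 2}

combinations = 1
for count in variants.values():
    combinations *= count
print(combinations)  # 3 * 2 * 2 = 12 combinations

# If a reliable read needs (say) ~1,000 visitors per combination,
# a full-factorial test needs that many times over in total traffic.
visitors_per_cell = 1000  # illustrative figure only
visitors_needed = combinations * visitors_per_cell
print(visitors_needed)  # 12000
```

A simple A/B test splits traffic two ways; this 12-way split is why low-traffic pages are usually better served by a series of A/B tests.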

Next, set up your test.  Depending on what tools you are using, you can go crazy at this stage.  If you are running web tests, there are plenty of great software tools available, from Google’s own testing tools in Google Analytics to KISSMetrics to Optimizely, or even native apps inside Marketing Automation Platforms.  If you are running an email test, you might need to build out some programming inside your automation platform.  Either way, at this stage in the game you have to put all the pieces together inside the technology.

Where do you go from here?  Wait and see…that’s up next.