EmailMarketingTipps

Articles



tactics Profit-Maximizing A/B Tests [PDF]

Marketers often use A/B testing as a tactical tool to compare marketing treatments in a test stage and then deploy the better-performing treatment to the remainder of the consumer population. While these tests have traditionally been analyzed using hypothesis testing, we re-frame such tactical tests as an explicit trade-off between the opportunity cost of the test (where some customers receive a sub-optimal treatment) and the potential losses associated with deploying a sub-optimal treatment to the [...] [more]
arxiv.org    Intelligence, Test
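The trade-off the abstract describes, opportunity cost during the test versus the risk of deploying the worse treatment, can be sketched numerically. This is a hypothetical regret calculation under a normal approximation, not the paper's actual procedure; all numbers are invented:

```python
from statistics import NormalDist

def expected_regret(n_test, n_total, lift, sigma):
    """Expected profit lost for a given test size: half the test group
    gets the inferior arm, while a too-small test risks deploying it
    to everyone else. `lift` is the (assumed known) true per-customer
    difference; `sigma` the per-customer outcome std. deviation."""
    test_cost = (n_test / 2) * lift
    # Std. error of the observed difference with n_test/2 per arm.
    se = sigma * (4 / n_test) ** 0.5
    p_wrong = 1 - NormalDist().cdf(lift / se)
    deploy_cost = p_wrong * (n_total - n_test) * lift
    return test_cost + deploy_cost

# Scan hypothetical test sizes for a population of 100,000 customers.
best_n = min(range(200, 20001, 200),
             key=lambda n: expected_regret(n, 100_000, lift=0.01, sigma=0.3))
```

The interior minimum is the point of the paper's framing: testing too few customers makes a wrong deployment likely, testing too many wastes the test group itself.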

discussion Guidelines for A/B Testing

1) Have one key metric for your experiment. You can (and should!) monitor multiple metrics to make sure you don’t accidentally tank them, but you should have one as a goal. Revenue is probably the wrong metric to pick. It is likely a very skewed distribution, which makes traditional statistical tests behave poorly. See my discussion in my A/B testing talk (around the 23-minute mark). I generally recommend proportion metrics. First, you often care more about the number of people doing something than [...] [more]
hookedondata.org    Test
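The preference for proportion metrics above can be illustrated with a standard two-proportion z-test, which behaves well for binary did-or-didn't outcomes. The conversion counts below are invented for illustration:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing two conversion proportions
    using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: 110/1000 vs 150/1000 conversions.
z, p = two_proportion_z_test(110, 1000, 150, 1000)
```

The same machinery applied to raw revenue would lean on a normality assumption that heavily skewed revenue distributions violate, which is the talk's point.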

tactics The Bayesian Logic in A/B Testing

Our new API feature for A/B testing transactional emails uses a statistical algorithm, Bayesian Logic, for picking the winning variant of the message. [...] [more]
sparkpost.com    Test
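The excerpt doesn't spell out SparkPost's exact algorithm, but a common Bayesian approach to picking a winning variant is a Beta-Binomial posterior comparison: draw plausible open rates for each variant from its posterior and count how often B beats A. A minimal sketch with hypothetical open counts:

```python
import random

def prob_b_beats_a(opens_a, n_a, opens_b, n_b, draws=20000, seed=0):
    """Monte Carlo estimate of P(rate_B > rate_A) under uniform
    Beta(1, 1) priors on each variant's open rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        rate_a = rng.betavariate(1 + opens_a, 1 + n_a - opens_a)
        rate_b = rng.betavariate(1 + opens_b, 1 + n_b - opens_b)
        wins += rate_b > rate_a
    return wins / draws

# Hypothetical open counts: 180/1000 vs 220/1000.
p_b_better = prob_b_beats_a(180, 1000, 220, 1000)
```

A system can then declare B the winner once this probability crosses a chosen threshold, e.g. 95%.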

tactics Increasing Response Rates to Email Surveys in MOOCs [PDF]

To investigate ways of increasing email response rate, we designed experiments that manipulated the textual elements of the emails. We conducted experiments in a MOOC setting, with email surveys sent out to over 3,000 learners. The emails were sent to elicit responses as to why learners were not engaging with the course. We found that response rates were significantly increased by varying how closely emails were framed as pertaining to a learner's personal situation, such as by changing introductor [...] [more]
acm.org    Subjectline, Copywriting, Study, Test

tactics Personalized Calls to Action Perform 202% Better Than Basic CTAs

For this study, I analyzed more than 330,000 CTAs. There are three primary types of CTAs I looked at in this post: Basic CTA -- This is a call-to-action that does not change based on any attributes of the visitor. It's the same for every visitor that sees it. Multivariate CTA -- These are similar to Basic CTAs, but instead, there are two or more CTAs being tested against one another. Traffic is typically split evenly to each variation and then you [...] [more]
hubspot.com    Design, Clickrate, Test

discussion Multivariate vs. A/B Testing: Incremental vs. Radical Changes

In the world of design-optimization methods, A/B testing gets all the attention. Multivariate testing is its less understood alternative, often deemed too time-consuming to be worth the wait. While this method has its limitations, they are counterbalanced by its benefits, which cannot be easily achieved using A/B testing alone. [...] [more]
nngroup.com    Test

stats Email Marketing Priorities and Budget Changes for 2018

Personalization, automation, and A/B testing topped the list of 2018 email marketing priorities for brands, according to Litmus’ 2018 State of Email Survey. [...] [more]
litmus.com    Study, Test

tactics 13 Email A/B Testing Mistakes that Limit Your Success

1. Test your automated and transactional emails, not just your broadcast and segmented emails. Nearly 39% of brands never or rarely A/B test their broadcast and segmented emails, according to Litmus’ 2017 State of Email Creative report. That’s a missed opportunity, but there’s even more money left on the table when you look at automated and transactional emails. More than 65% of brands never or rarely A/B test their automated emails and 74% never or rarely A/B test their transactional emails. [...] [more]
litmus.com    Test

stats How Long Should You Run Your A/B Test?

For opens, we found that wait times of 2 hours correctly predicted the all-time winner more than 80% of the time, and wait times of 12+ hours were correct over 90% of the time. Bar chart of data showing that the likelihood of selecting the correct winner in an A/B test based on opens increases over time. Accuracy of results reaches 80% after 2 hours. Clicks with wait times of just 1 hour correctly chose the all-time winner 80% of the time, and wait times of 3+ hours were correct over 90% of the time. [...] [more]
mailchimp.com    Study, Test
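Mailchimp's finding, that longer waits pick the all-time winner more reliably, can be reproduced in miniature with a simulation. The open rates and sample sizes below are invented, not Mailchimp's data:

```python
import random

def early_pick_accuracy(p_a, p_b, n_early, trials=1000, seed=1):
    """Simulate how often an early read of n_early recipients per arm
    picks the true winner, assuming variant B is genuinely better."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        opens_a = sum(rng.random() < p_a for _ in range(n_early))
        opens_b = sum(rng.random() < p_b for _ in range(n_early))
        correct += opens_b > opens_a
    return correct / trials

# Hypothetical open rates: accuracy grows with the early sample size,
# i.e. with how long you wait before calling the test.
acc_small = early_pick_accuracy(0.20, 0.23, 200)
acc_large = early_pick_accuracy(0.20, 0.23, 2000)
```

The qualitative shape matches the article: a short wait already beats a coin flip, but accuracy keeps climbing as more opens accumulate.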

discussion 14.5% increase in job applications with a simple strategy change

Targeted Email Campaigns are highly effective at boosting quality job applications on your site. We recommend testing various approaches to determine how your audience responds. And, as a bonus, once you begin to see success and monitor the results, you can offer this service to your employers. From our prior research, fewer than 10% of job boards offer a service like Targeted Email Campaigns. [...] [more]
medium.com    Subjectline, Test

tactics Weekend bias in send time optimisation

Own the assumptions of your algorithm and measurement process: real life will always differ from the lab, and you want to measure your results even when you know your model oversimplifies reality. Consider alternatives to A/B testing, not only for making decisions but also for measuring your existing algorithms. You might have problems either with your machine learning algorithm or with your measurement process: watch out for both! [...] [more]
craftlab.hu    Intelligence, Test, Sendtime

stats Econsultancy Conversion Rate Optimization Report 2017 [PDF]

Despite 38% using abandonment emails for CRO and a further 38% having plans to use them, less than a third (32%) consider them to be highly valuable for improving conversion rates. A/B testing and usability testing increased in perceived value for brands this year. On the other hand, website personalization, which is typically difficult to implement with consistent success, has decreased in value. This could be a result of the challenges companies experience, with negative consumer perceptions of [...] [more] 
redeye.com    Automation, Conversionrate, Study, Test

tactics Without or with the newsletter sign-up overlay?

Butlin’s, the popular UK-based family resorts site, teamed up with RedEye, a CRO and marketing automation firm, to conduct this enticing email capture test. The goal was to determine whether adding a sign-up overlay would increase newsletter sign-ups without disrupting conversions. [...] [more]
behave.org    Listbuilding, Test

tactics The surprising power of subject line emojis – Daniel Betts

Result: The test went out to 25% of our prospect database, split evenly between the two subject lines. We had 9% more opens on the emoji subject line. Despite the increase in opens, click data was largely the same – but at least we were getting in front of more eyes. [...] [more]
medium.com    Subjectline, Test

tactics Testing Strikethrough using Unicode in subject lines

A customer requested to use strikethrough in a subject line. It might sound like a simple thing to do, but as it hadn't been done before by our campaigns team, it required careful testing. Two things struck me about this request: 1. Will this work on most devices? 2. Will it actually add value to the subject line? [...] [more]
communicatorcorp.com    Subjectline, Test
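Since subject lines carry no HTML, strikethrough is typically faked with the Unicode combining long stroke overlay (U+0336) inserted after each character. A minimal sketch; whether it renders at all varies by client and device, which is exactly what the author's testing had to check:

```python
def strikethrough(text: str) -> str:
    """Fake strikethrough by appending the combining long stroke
    overlay (U+0336) to every character. Clients without support
    may show the raw combining marks instead."""
    return "".join(ch + "\u0336" for ch in text)

# Hypothetical subject line: strike the old price, keep the new one plain.
subject = strikethrough("£49.99") + " £29.99 this weekend only"
```

Stripping the U+0336 marks back out recovers the original text, so the fallback on unsupporting clients is merely ugly rather than unreadable.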