Making it easier for small publishers to bolster their subscriber base

Why did we choose this problem? In our research phase, we learned that many small publishers have a hard time gaining a large-enough initial cohort of meaningfully engaged users to monetize through membership, subscriptions, or donations. To begin converting readers to subscriptions or memberships, publishers need a significant user base, and enough data about those users, to make that jump.

Just getting that initial traction is a hard challenge for many. We know, for instance, that email drip marketing has been super effective for membership-driven news orgs, but simply getting started can be a pain. There are suites of premium services and an industry built around this, but we think the time, budget, and sometimes tech expertise required to benefit from these tools can strain the folks who most need them, and who can do the most good, in this space.

Most of the publishers we interviewed were already starting or maintaining newsletters and email lists, so this seemed like the best place to collaborate on solving “top of funnel” acquisition and conversion problems.

An opt-in tool we haven’t named yet

We have started designing an opt-in app, an “opt-in” being a call-to-action, strategically positioned to, well, get folks to sign up for a newsletter or join a site. You’ve seen them before: these are the cards that slide in or slide up from the bottom right of a page, or the popups that appear in your face just as you start reading.

I’m making this example sound super annoying on purpose because, while there are plenty of services that offer a variety of these, there’s definitely a right way to use them. We think there’s an opportunity to incorporate engagement and user experience metrics in a service like this to ensure news orgs have all the information they need to use opt-ins effectively.
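
Done well, that means waiting until a reader is actually engaged, and respecting a dismissal instead of nagging on every page view. As a minimal sketch of the idea (the element IDs, storage key, and thresholds here are hypothetical, not our final design):

```typescript
// Sketch: show a slide-in opt-in only after real engagement, and
// respect a prior dismissal. IDs, key, and thresholds are illustrative.
const OPTIN_ID = "optin-slide-in";       // hypothetical element ID
const DISMISS_KEY = "optin-dismissed-at";
const COOLDOWN_DAYS = 30;                // don't re-show for a month
const SCROLL_THRESHOLD = 0.5;            // show after 50% of the page is read

function wasRecentlyDismissed(): boolean {
  const raw = localStorage.getItem(DISMISS_KEY);
  if (!raw) return false;
  return Date.now() - Number(raw) < COOLDOWN_DAYS * 24 * 60 * 60 * 1000;
}

function scrollDepth(): number {
  const scrollable = document.documentElement.scrollHeight - window.innerHeight;
  return scrollable > 0 ? window.scrollY / scrollable : 1;
}

function maybeShowOptin(): void {
  if (wasRecentlyDismissed() || scrollDepth() < SCROLL_THRESHOLD) return;
  document.getElementById(OPTIN_ID)?.classList.add("visible");
  window.removeEventListener("scroll", maybeShowOptin);
}

// Remember a dismissal so the card doesn't nag on the next page view.
function dismissOptin(): void {
  localStorage.setItem(DISMISS_KEY, String(Date.now()));
  document.getElementById(OPTIN_ID)?.classList.remove("visible");
}

window.addEventListener("scroll", maybeShowOptin, { passive: true });
document.getElementById("optin-close")?.addEventListener("click", dismissOptin);
```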

We’re interested in opt-ins not as an end unto themselves, but because they can be a neat vector to help answer the pressing questions we, and many other small publishers, have:

  • How might we become smarter about newsletter subscriber acquisition for small-scale journalism publishers?
  • How might we serve those calls-to-action in the best places and at the best times to maximize return on investment?
  • How might we give publishers the ability to run their own experiments, like small-scale A/B tests on messaging, to maximize that return (see the sketch after this list)
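
None of this is built yet, but to make that last question concrete, here is a rough sketch of how a plugin could deterministically assign a returning visitor to a messaging variant, so each visitor always sees the same copy. The hash choice and all names here are our illustration, not a finished design:

```typescript
// Sketch: stable A/B assignment so a visitor always sees the same
// messaging variant. Names and the hash are illustrative only.
function hashString(s: string): number {
  // FNV-1a, a simple non-cryptographic hash.
  let h = 2166136261;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0;
}

function assignVariant(visitorId: string, testName: string, variants: string[]): string {
  // Mixing in the test name keeps assignments independent across tests.
  const bucket = hashString(`${testName}:${visitorId}`) % variants.length;
  return variants[bucket];
}

// Example: two headline variants for one opt-in.
const headline = assignVariant(
  "visitor-1234",                     // e.g., an anonymous cookie value
  "newsletter-headline-test",
  ["Get our weekly roundup", "Never miss a local story"]
);
```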

WordPress and MailChimp

Of the publishers we interviewed, most were already using WordPress and MailChimp to manage their site and email lists, respectively. For this project, we plan to build a tool that is as easy as possible to get up and running, so we’ve committed to building this out as a WordPress plugin that is ready to connect with MailChimp out of the box.
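
For the curious, here is a rough sketch of what that out-of-the-box connection involves: MailChimp’s v3 API lets a server-side component add a subscriber to a list with one authenticated request. The data center prefix, list ID, and key handling below are placeholders, and a real plugin would make this call from PHP inside WordPress, but the shape of the call is the same:

```typescript
// Sketch: add a subscriber to a MailChimp list via the v3 API.
// The "us1" data center, list ID, and API key are placeholders.
async function subscribe(email: string): Promise<void> {
  const dc = "us1";                      // data center from the API key suffix
  const listId = "YOUR_LIST_ID";         // placeholder
  const apiKey = process.env.MAILCHIMP_API_KEY ?? "";

  const res = await fetch(`https://${dc}.api.mailchimp.com/3.0/lists/${listId}/members`, {
    method: "POST",
    headers: {
      // MailChimp uses HTTP Basic auth; the username can be any string.
      Authorization: "Basic " + Buffer.from(`anystring:${apiKey}`).toString("base64"),
      "Content-Type": "application/json",
    },
    // Use status "pending" instead to require double opt-in.
    body: JSON.stringify({ email_address: email, status: "subscribed" }),
  });

  if (!res.ok) throw new Error(`MailChimp error: ${res.status}`);
}
```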

Usability Test Phase One

In working on a tool whose end goal is making its users better informed through good data, we’re doing our best to walk the walk. Our front-end engineer Al Delcy mocked up a high-fidelity prototype and user flow summarizing our best guesses for what early adopters might want and how their setup, onboarding, and use might look, which we then turned over to the Information Experience Lab for usability testing.

Over the course of a month, 10 participants in the lab’s study were observed performing various tasks, from installing the plugin to designing, previewing, and publishing their opt-in, and were asked to rate each task on its ease of use.

Implications

I’m oversimplifying, but IE Lab identified some great opportunities to improve labeling throughout the app, including the name we were rolling with (!): “ListBuilder”, as well as general usability. The overall System Usability Scale score (69.25) and the Usability sub-score (69.69) are about average, while the below-average Learnability sub-score (67.50) indicated that this first prototype was more complicated and jargon-heavy than it had to be.
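
For readers unfamiliar with those numbers: a SUS score comes from ten 1–5 agreement items and is scaled to 0–100, with roughly 68 as the commonly cited average. Assuming the standard scoring and the common two-factor split (items 4 and 10 measure Learnability, the other eight Usability), the arithmetic looks like this:

```typescript
// Sketch: standard SUS scoring from ten 1-5 responses.
// Odd-numbered items are positively worded, even-numbered negatively.
function susScores(responses: number[]): { sus: number; usability: number; learnability: number } {
  if (responses.length !== 10) throw new Error("SUS needs exactly 10 responses");

  // Each item contributes 0-4: (r - 1) for odd items, (5 - r) for even items.
  const contrib = responses.map((r, i) => (i % 2 === 0 ? r - 1 : 5 - r));

  const total = contrib.reduce((a, b) => a + b, 0);
  // Items 4 and 10 (indexes 3 and 9) form the Learnability factor.
  const learn = contrib[3] + contrib[9];

  return {
    sus: total * 2.5,                    // 10 items, max 40, scaled to 0-100
    learnability: learn * 12.5,          // 2 items, max 8, scaled to 0-100
    usability: (total - learn) * 3.125,  // 8 items, max 32, scaled to 0-100
  };
}
```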

Early adopter interviews and defining the MVP

A user experience is holistic. One takeaway from our first usability study could be that, in an attempt to make our app feature-rich and implement things folks like from a variety of alternatives, we were too concerned with pomp and quantity rather than starting from the single-responsibility principle: do one thing well. We decided to take a step back and identify the features that likely early adopters value.

We made a list of every single potential feature that might make it into our app (and we mean every: e.g., that a user can save their progress, that a user can see a list of opt-ins they’ve created, …) and interviewed a handful of folks we identified as early adopters. For each feature, they were asked:

  1. How would you feel if [this feature] were present?
  2. How would you feel if [this feature] were not present?

For each question, our adopters could answer only in one of five ways:

  • I like it
  • I expect it
  • I am neutral
  • I can tolerate it
  • I dislike it

The results of this method, called the Kano Model, can then be mapped onto a graph that measures customer satisfaction for each feature. At a bird’s-eye view, we’re then able to get a feel for the net user experience: poorly implemented or disliked features pull down the overall value.

Kano model
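
To make the mechanics concrete, here is a sketch of how those answer pairs are typically scored. The lookup table is the standard Kano evaluation table, and the “better/worse” numbers are the satisfaction coefficients usually plotted on that graph; the names and shape of the code are ours:

```typescript
// Sketch: classify Kano answer pairs and compute satisfaction coefficients.
type Answer = "like" | "expect" | "neutral" | "tolerate" | "dislike";
// A = attractive, O = one-dimensional, M = must-be,
// I = indifferent, R = reverse, Q = questionable
type Category = "A" | "O" | "M" | "I" | "R" | "Q";

const ANSWERS: Answer[] = ["like", "expect", "neutral", "tolerate", "dislike"];

// Rows: answer when the feature is present; columns: when it is absent.
const TABLE: Category[][] = [
  // like  expect  neutral tolerate dislike   <- feature absent
  ["Q",    "A",    "A",    "A",     "O"],  // like (feature present)
  ["R",    "I",    "I",    "I",     "M"],  // expect
  ["R",    "I",    "I",    "I",     "M"],  // neutral
  ["R",    "I",    "I",    "I",     "M"],  // tolerate
  ["R",    "R",    "R",    "R",     "Q"],  // dislike
];

function classify(present: Answer, absent: Answer): Category {
  return TABLE[ANSWERS.indexOf(present)][ANSWERS.indexOf(absent)];
}

// Across all respondents for one feature: "better" estimates how much
// satisfaction the feature can add (0..1); "worse" how much its absence
// hurts (-1..0). Reverse and questionable answers are excluded.
function coefficients(cats: Category[]): { better: number; worse: number } {
  const count = (c: Category) => cats.filter((x) => x === c).length;
  const a = count("A"), o = count("O"), m = count("M"), i = count("I");
  const denom = a + o + m + i || 1;
  return { better: (a + o) / denom, worse: -(o + m) / denom };
}
```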

Common patterns emerge after just a few of these interviews, which we then can (and did) use as a reference to inform our decision about which features our app will have in version 1.

This becomes our minimum viable product. We are able to see which features must make the cut for it to feel complete. We can see which features most excite early adopters, and which features, even if they’re super cool, require way more investment than we can make to pull off correctly.

Next steps

Now that we have great feedback and direction from our users about what they need and how they want to use this tool, we can focus our effort on making the minimum viable product (or MVP) to meet their needs. Our next step is to build that MVP using the insight we gained from these user tests; IE Lab will then help us test it again throughout development. These cycles repeat until the tool meets the requirements we laid out in our Kano model. Our goal is to have an MVP launched and ready for publishers to try by the fall. Thanks to iterative development and proactive user research, we can have a lot more confidence that we’re building something useful for ourselves, for our partners, and for more small publishers who are seeking to build long-term relationships with their users and a larger base of support for their organizations.
