Unlock media innovation by experimenting every day
October 13, 2022 | By Peter Houston – Independent Media Reporter (@flipping_pages)

In science, experimentation is everything. In publishing, maybe not so much.
Innovation is the go-to buzzword for ambitious publishers, but there’s no such thing as plug-and-play innovation. You can’t buy it off the shelf. It’s a process, not a product. And it’s always going to be specific to your business.
For workflow specialist Kilian Schalk, experimentation is a crucial part of that process. At the Magazine Street conference in Edinburgh, Scotland, last month, New York-based Schalk emphasized the need for publishing organizations to experiment constantly if they want to get better at what they do.
“Why would you experiment every day? Well, the point is that every little incremental change can make a difference. Every time you do something you’re thinking about it. You’re being present.”
With a career stretching from Rolling Stone to The New Yorker, Schalk understands that where you start experimenting depends on what you want to improve in your operation. Generally, though, your pain points are the best place to begin.
“What is most urgently in need of attention within the organization, those are good places to start,” he says.
He also recommends focusing first on experiments that will bring answers quickly.
Newsletters are often a great starting point. They have a regular cadence, which makes it easy to spot trends, and they go out to the same set of known users. You also see results quickly, and adjustments to a single email can have a significant impact.
Optimizing tactics
Effective experimentation should generally be focused on developing and optimizing tactics in the service of a larger strategy. Look first at your strategic plan and consider the measurable targets that you are trying to reach.
Schalk cautions that sometimes the assumptions embedded in an overarching strategy can be shown to be incorrect. While this can be painful, he says it’s good news, and better discovered sooner rather than later.
“Implementing strategies based on incorrect assumptions will drive organizations into a ditch and kill morale as well as belief in the leadership’s ability to lead,” he warns.
Becoming aware of untested assumptions is a particular challenge for leadership with deeply held beliefs about audiences or products. For Schalk, the biggest asset in identifying and challenging these is a diverse and empowered team.
“People must feel free to draw on their own experience to question prevailing beliefs or speak up to ask: ‘How do we know that?’”
Schalk believes the first step in surfacing assumptions that you can experiment on is to stay focused on what you are doing and ask questions as you go. He points to methods like the Five Whys to help drill down to the root cause of a problem: identify the problem, then keep asking why until you arrive at the reason it’s happening and, hopefully, a potential solution.
“Experimentation is not a lot of work,” says Schalk. “It’s a lot of thinking.”
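To make the drill-down concrete, here is a minimal sketch in Python of what a Five Whys chain might look like for a hypothetical problem: declining newsletter open rates. The questions, answers and ‘root cause’ are invented for illustration and are not taken from Schalk’s talk.

```python
# A hypothetical Five Whys drill-down. Every question and answer below is
# invented for illustration, not taken from Schalk's talk.
problem = "Newsletter open rates are falling."

five_whys = [
    ("Why are open rates falling?",
     "Fewer subscribers open the email on the day it arrives."),
    ("Why do fewer subscribers open it that day?",
     "It lands mid-morning, when most readers are at work."),
    ("Why does it land mid-morning?",
     "The send is triggered as soon as production finishes."),
    ("Why is the send tied to production?",
     "The schedule was built around the team's workflow, not the audience."),
    ("Why was it built around the workflow?",
     "Nobody has ever tested an audience-centered send time."),
]

print(f"Problem: {problem}")
for depth, (question, answer) in enumerate(five_whys, start=1):
    indent = "  " * depth
    print(f"{indent}Why #{depth}: {question}")
    print(f"{indent}-> {answer}")

# The final answer is a testable assumption: exactly the kind of thing
# Schalk suggests turning into an experiment.
```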
Data is your friend
Before data was a thing, instinct was everything. Editors especially relied on their gut to make the big decisions, from editorial calendars to cover stars. But in the world of real-time media metrics, behavioral data is a much more reliable indicator.
“Our gut instinct sometimes will lead us in good directions. But it’s most dangerous when it is unchallenged and unverified. Discovering that your instincts are incorrect is more valuable than reinforcing previously held beliefs.”
Schalk gives the example of a client whose social media schedule was driven by their content creation workflow. The publisher always shared social posts in the middle of the day, as soon as fresh content had been published to the web. Although they were unhappy with engagement levels, they had never stopped to think about the best time to post. They assumed that audiences would simply read content when it was posted.
Once they surfaced this assumption, they tested it by sharing their content at night instead of at 3pm. Aggregate audience engagement for the publication turned out to be five times higher at night. When they did some research, they discovered that much of this nighttime audience were parents, who went on Facebook once they’d put their kids to sleep.
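A test like that does not need special tooling. As a rough sketch, assuming you can export per-post timestamps and engagement counts from your social analytics (the records and numbers below are invented for illustration), comparing the two time slots can be as simple as:

```python
from collections import defaultdict

# Hypothetical per-post records: the hour each post went out and the
# engagement it earned. In practice these would come from the publisher's
# own analytics export; every value here is invented for illustration.
posts = [
    {"hour": 15, "engagements": 120},
    {"hour": 15, "engagements": 90},
    {"hour": 21, "engagements": 510},
    {"hour": 22, "engagements": 640},
    {"hour": 21, "engagements": 580},
]

buckets = defaultdict(lambda: {"posts": 0, "engagements": 0})
for post in posts:
    slot = "evening (20:00-23:00)" if 20 <= post["hour"] <= 23 else "midday (3pm)"
    buckets[slot]["posts"] += 1
    buckets[slot]["engagements"] += post["engagements"]

for slot, stats in buckets.items():
    average = stats["engagements"] / stats["posts"]
    print(f"{slot}: {stats['posts']} posts, {average:.0f} engagements per post on average")
```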
Critical thinking
Schalk says that if you think critically about the nature of your problems, the potential solution is usually not far behind. He explains that “learning how to learn” is a key skill in mastering an efficient experimental mindset and that comes from thinking critically and running experiments on a day-to-day basis.
Giving things a go is very different from running a formal experiment. There are many ways to set up experiments, but the most effective will answer three questions before the work begins (a minimal sketch of the full loop follows the lists below):
- What has been happening?
- What do we think will happen if we do something different?
- How will we measure this?
Once the experiment has been run and the data gathered, you can then ask:
- What actually happened?
- How does what happened compare with what we thought would happen?
- What did we learn?
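As a minimal sketch of that loop, here is one way a team might write the hypothesis down before the test and compare the prediction with the outcome afterwards. The structure and every number are assumptions for illustration, not a tool Schalk prescribes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Experiment:
    # What has been happening?
    baseline: str
    baseline_open_rate: float
    # What do we think will happen if we do something different?
    change: str
    predicted_open_rate: float
    # How will we measure this?
    metric: str
    # What actually happened? (filled in after the experiment has run)
    observed_open_rate: Optional[float] = None

    def review(self) -> str:
        """Compare what actually happened with what we thought would happen."""
        if self.observed_open_rate is None:
            return "No result recorded yet."
        delta = self.observed_open_rate - self.predicted_open_rate
        return (f"Predicted {self.metric} of {self.predicted_open_rate:.0%}, "
                f"observed {self.observed_open_rate:.0%} "
                f"({delta:+.0%} against the prediction).")

# Hypothetical numbers for a send-time test, written down before the run.
send_time_test = Experiment(
    baseline="Newsletter sent on Thursday mornings",
    baseline_open_rate=0.22,
    change="Send the same newsletter on Saturday morning instead",
    predicted_open_rate=0.30,
    metric="open rate",
)

# After the test period, record what actually happened and review.
send_time_test.observed_open_rate = 0.44
print(send_time_test.review())
```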
Schalk says things are most likely to go off the rails when a publishing team doesn’t produce a written, measurable hypothesis that they are trying to prove or disprove. They haven’t answered the question ‘What do we think will happen?’
“The human instinct is to preserve energy by not going against the flow,” he explains. “This means people wait for the results then ‘rewrite’ the hypothesis in such a way that it justifies previously held beliefs. This is especially important when it comes to assumptions that leadership holds dear.”
Outside in
A key concern in deciding what experiments to run is whether to focus efforts internally or externally.
The decision ultimately depends on the specific needs of the business, but Schalk says that in a multi-channel world, most brands want to know about their audience first.
Once you have a clear understanding of what your audience responds to and where organizational energy should be directed, the focus can shift to internal experiments that make specific workflows more efficient and effective, delivering more or different content at the right time to the right places.
From the stage in Edinburgh, Schalk gave some examples of experiments that delivered real results.
- A 100-year-old weekly magazine worried that their subscriber base was getting older. They assumed that their online engagement was concentrated among their oldest readers. However, they discovered that the youngest members of their audience were every bit as engaged online as the oldest. Taking this data as their starting point, they experimented with a podcast targeting a younger demographic. It was a hit.
“I was supposed to have lunch with the editor, Zach, but he canceled on me because he had been invited to give a live performance of the podcast in Australia.”
- A bi-weekly newspaper alternated print editions with an email newsletter. The open rates on the newsletter were dropping and they knew they were in trouble if they didn’t change something. Instead of publishing on Thursdays, they decided to experiment with a Saturday send. The hypothesis was that open rates would increase because people might have the time to sit back and read it on the weekend. Open rates doubled.
“It took probably 10 seconds to pick a different send time on their newsletter. They just had to have the imagination to try something different.”
- A monthly magazine launched a weekly newsletter. Results were good, but they noticed a button in their email distribution service that said ‘Resend content’, which would send the email again to anyone who didn’t open it the first time. The resend lifted their open rate by 12 percentage points, from 39% to 51% (a quick check of that arithmetic follows below).
“They used what I call the ‘what does this button do’ approach. Now, more than half the people that get their newsletter open it.”
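To keep absolute and relative lift straight: going from 39% to 51% is a gain of 12 percentage points, which works out to roughly a 31% relative increase in opens. The check below uses only the rates quoted above.

```python
# Open rates quoted for the resend experiment above.
open_rate_before = 0.39  # before the "Resend content" step
open_rate_after = 0.51   # after resending to non-openers

absolute_lift = open_rate_after - open_rate_before  # gain in percentage points
relative_lift = absolute_lift / open_rate_before    # growth relative to the baseline

print(f"Absolute lift: {absolute_lift * 100:.0f} percentage points")
print(f"Relative lift: {relative_lift:.0%} more opens than before")
```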
Does experimentation work?
Schalk is careful to point out that not every experiment works and that there is no specific set of instructions or examples to follow. Of course, what works for some publishing organizations might not work for others.
He suggests that it’s OK to use other people’s results as inspiration. However, it is important to recognize that unless you are working with your own data, you won’t know what you are actually dealing with. “Your circumstances may be quite different from someone else’s. And you may get very different results from a similar experiment.”
Reassuringly, running your own experiments shouldn’t require a huge investment.
“The goal is to adjust what you are already doing in a way that allows you to learn something new. The power is to show you something you didn’t know you didn’t know.”