Almost a year ago, 10up was approached with an idea to run a lab to help evaluate, test, and establish a set of best practices for hyper-local publishers to optimize their revenue. (10up’s post here)
One of the coolest challenges of this project for me was to step away from the normal project/platform delivery processes and shape a new approach specifically for the purposes of this lab.
We knew going into it that we wanted to be efficient and focus on things that almost any publisher could implement on their own, not fancy technical features that require heavy custom engineering or expensive tools and platforms. Rather than implementing some personalized content recommendation engine, we looked for fundamental best practices like social sharing buttons, lazy loading of ads, sticky ads, AMP implementation, and more.
As we started working through this assessment, we quickly arrived at the question: “How should we prioritize these items if they don’t present the same level of opportunity for each publisher?”
While some items were black and white, like social sharing buttons, many others depended on a mix of metrics to determine whether they were worth our time, or the publisher’s, to update or implement.
This is really when things started getting challenging, interesting, and fun.
I worked with my team to come up with some basic “human logic” tied to each publisher’s benchmarks. While it’s not something we could easily automate, the team’s experience allowed us to look at things like the News Consumer Insights reports and simple metrics from Google Analytics, and quickly weigh each opportunity on a simple scale, similar to t-shirt sizing in agile frameworks, except with opportunity as the unit of measurement instead of effort.
For example, a publisher may not have had AMP implemented, but if they already had fast mobile pages, strong mobile organic rankings, and a decent set of rich results, we wouldn’t call it a “high” opportunity. We would still queue it up as something to test once we had implemented the tactics with the most potential.
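To make that weighing concrete, here is a minimal sketch of how the sizing logic for the AMP example above might look if it were written down. The function name, metric names, and thresholds are all hypothetical illustrations; in the lab this was a human judgment call informed by benchmarks, not an automated score.

```python
def size_amp_opportunity(mobile_speed_score, mobile_organic_share, rich_results_count):
    """Return a rough t-shirt size ("high"/"medium"/"low") for the AMP opportunity.

    A publisher already doing well on mobile speed, mobile organic rankings,
    and rich results has less to gain from AMP, so the opportunity is sized
    lower. All thresholds below are illustrative assumptions.
    """
    strengths = 0
    if mobile_speed_score >= 80:      # mobile pages are already fast
        strengths += 1
    if mobile_organic_share >= 0.4:   # strong mobile organic presence
        strengths += 1
    if rich_results_count >= 10:      # decent set of rich results
        strengths += 1

    if strengths == 0:
        return "high"
    if strengths == 1:
        return "medium"
    return "low"  # still queued for later testing, just not first
```

A publisher with slow mobile pages and no rich results would size as “high,” while the well-optimized publisher in the example above would size as “low” and be tested later.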
We took this approach to an initial cohort of publishers and saw some outstanding results in the first few months. Because we had developed an efficient process and could lean on collaboration with the publishers, we were able to run through the process with another set of publishers to provide more supporting data for all of our core tactics.
We launched our first case study toward the end of 2019 and have several more coming, along with a series of webinars that provide a more interactive forum for publishers and their teams to learn about specific topics and ask questions.
Stay tuned for the launch of the Ad Revenue Accelerator site with all of our findings and recommendations as well!