All news enterprises must continuously get better at what they do. Consider, for example, the headline rodeo at the Dallas Morning News. Each morning, folks gather and use headlines as the basis for choosing stories to emphasize. But they do more than that. They brainstorm and discuss how they might craft headlines that generate better results. They also review results for previous story headlines – and tease out lessons about what worked well, what didn’t work well, and why.
This is what continuous improvement looks like. It is focused on performance results and learning: learning from continuously doing the work at hand in new and different ways every day. In well-managed enterprises, everyone seeks to improve continuously – every day – in the work they do. This discipline for continuous improvement includes:
- Setting clear performance goals: set specific performance goals that folks are expected to achieve through continuous improvement – that is, getting better at what they do every day (Example: Desk X commits to growing traffic and engagement by 5% each month.)
- Articulating learning objectives: expect people to identify and articulate what they are trying to learn – what hypotheses they are trying to test. Then pay attention to – and debrief and learn from – what works, what doesn’t work and why. (Example: Over the next few months, Desk X will experiment and learn about the impact of “two or more story elements” – visual, text, audio, etc. – on traffic and engagement.)
- Being clear on the resources needed: provide folks the time and space – and in some cases cash – needed to practice continuous improvement. (Example: Those of us on Desk X will experiment with ‘two or more story elements’ at least one-third of the time, seek out visual, audience and other specialists to help us, and spend at least an hour a week debriefing and learning what works.)
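A goal like Desk X’s compounds faster than it might appear: 5% month over month is nearly 80% growth in a year. A quick back-of-the-envelope sketch (the baseline figure is illustrative, not from the source):

```python
# A 5% month-over-month growth goal compounds: after 12 months,
# traffic is roughly 1.8x the starting baseline.
baseline = 100_000        # illustrative monthly page views (hypothetical)
monthly_growth = 0.05     # Desk X's 5% monthly commitment

traffic = baseline * (1 + monthly_growth) ** 12
print(round(traffic))     # ~179,586 -- almost 80% above the baseline
```

The point of writing the goal as a number is that the desk can check progress against it every month, not just at year’s end.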
The nature of the risks and uncertainties involved distinguishes continuous improvement from fundamental innovation. For example, contrast Dallas’ headline rodeo with Philadelphia’s effort to use data as a bridge to better understand and act on what the newsroom and ad sales see as valuable users. Philadelphia’s effort was more fundamental than that of Dallas because the Philly folks had to:
- Figure out if the technology existed to even make the innovation possible: The data and analytical formats used by marketing differed from those used in the newsroom. In addition, getting to shared data and analytical approaches demanded partnering with third parties in entirely new ways.
- Shape and grow significant new capabilities: Folks in the newsroom, marketing and ad sales all had to learn new skills, behaviors, attitudes and ways of working together. Content creators had to figure out how to make use of the insights from the data bridge in ways that supported high-quality journalism. Ad sales folks had to learn how to shift from selling space for ads to selling audiences.
- Learn if the effort would be attractive to advertisers: Philadelphia was confident that advertisers would pay more for attractive audiences. Still, they had to find out if this would happen.
- Deal with internal questions, skepticism and/or lack of confidence about whether the effort was worthwhile: Innovations of this magnitude inevitably involve fundamental shifts in ‘how we do things around here’ – which in turn trigger anxieties about whether the time, effort and money involved will be worth it.
This Philly effort went well beyond getting better every day at what folks did. And the discipline – what it takes to get good at fundamental innovation – is more extensive than that of continuous improvement because fundamental innovation:
- Involves more serious risks and uncertainties. If Dallas folks tweak a headline and get it wrong, the story will not do so well. If Philadelphia cannot get third parties to devote the time and attention to build a useful data and analytic bridge, neither users, the newsroom, advertisers nor ad sales folks will even get the chance to see if they can make a positive difference.
- Can quickly get expensive. Time, money and valued relationships (e.g. with the third parties) can get costly fast in the Philadelphia situation, whereas the headline rodeo in Dallas only redirects how existing resources are used rather than putting more of those resources at risk.
- Takes longer. The time it takes to learn whether a Dallas headline makes a difference to traffic and engagement is a day or less. Philadelphia’s data bridge innovation was still underway nearly a year after it began.
Fundamental innovation and continuous improvement each benefit from setting clear performance goals, articulating learning objectives and being clear on the needed resources. In addition, though, the discipline of fundamental innovation demands:
- Distinguishing assumptions from knowledge. Common sense and experience indicate that proposed changes in strategy or innovation involve assumptions. For example, the Philadelphia data bridge assumes that information can be assembled that is rich and actionable enough to lead to results. Research by Columbia Business School Professor Rita McGrath indicates that assumptions like this one – assumptions that inevitably get articulated at the beginning of major efforts – are all but forgotten within six weeks of the start of such projects. Six weeks. For a set of cultural and habitual reasons, folks involved in innovation across all industries somehow, someway come to believe and act as if articulating an assumption turns the assumption into reality.
To avoid this, folks in your news enterprise must distinguish assumptions (things that must work yet about which you have relatively little confidence) from knowledge (things that must work and about which you are reasonably to fully confident).
Writing these down – and monitoring instead of forgetting them – is a key element in the discipline of fundamental innovation. Consider using this four-point scale that was used in Table Stakes (as well as in other programs, such as Sulzberger, that deploy Doug Smith’s challenge-centric, performance-and-accountability approach™):
1. Pure gut, just a hunch, hopeful thinking
2. A few insights, some evidence, many remaining questions
3. Mounting evidence, a small number of remaining questions
4. Extensive evidence and known facts, real certainty/knowledge
For example, the Philly data bridge team confidently believed advertisers would pay for qualified and more attractive audiences. They monitored this confidence – and only grew more confident as a result. In contrast, though, the team had a lot of questions and uncertainty about whether (1) the needed technology would work; (2) the advertiser information would be rich and actionable enough to yield results; and, (3) enough value would happen soon enough to keep the effort going. By writing down these assumptions, testing and learning about them, and monitoring what emerged, the team avoided the traps of forgetting the assumptions and/or magically converting them into ‘facts’ to be relied upon.
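The discipline above – write assumptions down, score them on the four-point scale, and keep monitoring the low-confidence ones – can be sketched as a simple log. This is an illustrative sketch, not anything the Philly team actually built; the class and method names are invented for the example.

```python
from dataclasses import dataclass

# The four-point confidence scale from the text:
SCALE = {
    1: "Pure gut, just a hunch, hopeful thinking",
    2: "A few insights, some evidence, many remaining questions",
    3: "Mounting evidence, a small number of remaining questions",
    4: "Extensive evidence and known facts, real certainty/knowledge",
}

@dataclass
class Assumption:
    statement: str
    confidence: int  # 1..4 on the scale above

class AssumptionLog:
    """Write assumptions down and monitor them instead of forgetting them."""

    def __init__(self):
        self.items: list[Assumption] = []

    def record(self, statement: str, confidence: int) -> None:
        if confidence not in SCALE:
            raise ValueError("confidence must be 1-4")
        self.items.append(Assumption(statement, confidence))

    def needs_testing(self) -> list[Assumption]:
        # 1s and 2s are the assumptions to test first;
        # 3s and 4s are approaching knowledge.
        return [a for a in self.items if a.confidence <= 2]

# Illustrative entries loosely based on the Philly example:
log = AssumptionLog()
log.record("Advertisers will pay more for qualified audiences", 3)
log.record("The needed bridge technology will work", 1)
log.record("Advertiser data will be rich and actionable enough", 2)
print([a.statement for a in log.needs_testing()])
```

The value is less in the code than in the habit it encodes: every assumption gets a written statement and a score, and anything scored 1 or 2 stays on a visible to-test list rather than quietly hardening into ‘fact’.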
- Paying attention to the cost – not rate – of failure: Professor McGrath exhorts folks involved in fundamental innovation to fail fast, fail cheap in converting assumptions into knowledge – in the above scale, converting 1s and 2s into 3s and 4s. She has a file of costly failures (to make it into her file, a company has to have lost at least a billion dollars on an innovation that failed). In one example, executives at an industrial chemical company imagined using their chemical know-how to enter and dominate the field of women’s fashion. Industrial chemicals and women’s fashion: what do you think? Chemical compounds are essential to fibers and threads. Still, it’s not the most obvious connection; this innovation for the chemical company depended on many profound uncertainties. Yet, instead of cataloguing – then quickly and cheaply testing – the core assumptions, the company managed to invest more than a billion dollars before dropping the idea. Put differently, the chemical company fell into the trap well characterized by the famous line from the movie Field of Dreams: build it and they will come.
- Defining – and using – phases and “traffic lights” to manage/monitor progress of fundamental innovations: Because the time horizons for fundamental innovations can stretch over several months or longer, it’s wise to define a series of phases with varying degrees of expectations and permitted resources and risk-taking. The risks, resources and expectations for the first phase of any fundamental innovation are much more circumscribed than, say, the last phase when an enterprise is seriously considering rolling out something new. Separating these phases are “traffic lights”: formal evaluation points where innovations get reviewed and either approved for the next phase’s greater risks, resources and expectations (a green light), rejected and stopped (red light) or returned to the previous phase for one more try (yellow light).
You must define the number of phases plus expectations and allowable resources and risks for each phase. Three or four phases usually suffice. And, within each phase, you must define the expectations for what is to be accomplished. For example, phase 1 expectations might include failing fast and failing cheap at the most critical assumptions (the 1s and 2s in the scale mentioned above). Teams pursuing a fundamental innovation in phase 1 might be given limited resources and also told not to take any significant risks that could affect the brand. In this case, at the end of phase 1, only teams that had converted 1s and 2s into enough knowledge and confidence would be permitted to move to phase 2 (that is, given a green light).
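The traffic-light review can be reduced to a small decision rule. This is a minimal sketch under assumed review criteria – the text specifies the three lights but not the exact tests behind them, so the function’s parameters are illustrative:

```python
from enum import Enum

class Light(Enum):
    GREEN = "advance to the next phase's greater risks, resources, expectations"
    YELLOW = "return to the previous phase for one more try"
    RED = "reject and stop the effort"

def phase_review(critical_assumptions_converted: bool,
                 stayed_within_resources: bool,
                 retries_remaining: int) -> Light:
    """One traffic-light evaluation at the end of a phase.

    Criteria are illustrative: green only if the team converted its
    critical 1s and 2s into knowledge within the allowed resources.
    """
    if critical_assumptions_converted and stayed_within_resources:
        return Light.GREEN
    if retries_remaining > 0:
        return Light.YELLOW
    return Light.RED

# A phase-1 team that converted its 1s and 2s gets a green light:
print(phase_review(True, True, retries_remaining=1))
```

The design choice worth noting is that yellow is explicitly limited: a team can be sent back to a phase, but not indefinitely, which is what keeps the cost of failure low.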