- This Week in Review: Flipboard scoops up Zite, and Getty sets its photos free (kind of)
FLIPBOARD BUYS ITS RIVAL FROM CNN: Flipboard, the most prominent of the many social reading apps, bought one of its rivals, Zite, from CNN this week. CNNMoney's Laurie Segall pegged the deal at a value "as high as $60 million over time," taking future advertising revenue into account. The dollar amount was strangely vague because, as Bloomberg's Edmund Lee reported, CNN didn't actually get any cash in the deal, but instead got a small stake in Flipboard. The two companies also announced a content and ad-selling partnership. There's no mystery in this case as to whether Zite will survive the acquisition: Its co-founder, Mike Klaas, said Zite will shut down, with its technology folded into Flipboard. Klaas said both companies have been focused on helping people discover personalized content, though they diverge in how they do it: "Zite's focus was on topics, while Flipboard was mostly about publications; Flipboard was a reader for social media but Zite tried hardest to find articles you _couldn't_ find on your Twitter feed." ReadWrite's Dan Rowinski also said Flipboard will benefit greatly from the serendipity of Zite's recommendation technology, and Zite CEO Mark Johnson — who won't be joining Flipboard in the merger — agreed that adding Zite's back-end technology to Flipboard would be best for both apps, even though it meant the death of his own app. Zite's biggest shortcoming seems to have been not its technology but its relatively small audience: J.P. Mangalindan of CNN's Fortune reported that Zite never got the kind of traffic bump it had expected when it was bought by CNN. The social reader field now looks like it's dominated by Flipboard and Facebook's new Paper app, though Flipboard's Mike McCue told TechCrunch he's not worried about Paper, as it's simply "a different way of letting you look at Facebook." Rachel King of ZDNet agreed that Paper draws too narrowly from Facebook content to be considered a "Flipboard killer."
THE GLOBE AND GUARDIAN'S DIFFERING PAYWALL PATHS: The Boston Globe announced that it's turning the hard paywall at its BostonGlobe.com site into a metered model, a change that was reported to be in the works last month. BostonGlobe.com was created in 2011 when the Globe split into two sites, one free (Boston.com) and one paid (BostonGlobe.com). In a memo to staff, the Globe's editor, Brian McGrory, deemed the hard-paywalled BostonGlobe.com a success, with nearly 60,000 digital-only subscribers. The switch to a metered model (which allows readers 10 free stories over 30 days), he said, is simply an attempt to grow its readership.
McGrory said the two sites will now be completely separate, even competing; Boston.com will no longer publish anything produced by Globe staff, and Boston.com's staff will move out of the Globe newsroom. The Lab's Justin Ellis looked more closely at the split between the two sites and the rationale behind it, and new Globe owner John Henry explained the split and his other plans for the paper in emails to Boston magazine. Northeastern journalism professor Dan Kennedy said the change isn't a major shift but rather an expected course correction, though he protested the two-site strategy, calling it confusing: Why, he asked, should paying BostonGlobe.com customers have to go to Boston.com for anything? In a followup post, Kennedy said the Globe seems to be going with a hub-and-spoke model with a variety of affiliated projects, as opposed to a strict two-site model.

The Guardian, meanwhile, continues to move in the opposite direction on paid content online. Fresh off the announcement that the paper's digital revenues have jumped 25 percent in the past year and the £619 million sale of its share in AutoTrader, Guardian CEO Andrew Miller told a conference crowd that the window for a paywalled site has long since passed. Instead, he said, The Guardian will be looking into membership models over the next few months. Mike Darcey, head of the News Corp unit that includes the hard-paywalled Times and Sunday Times of London, backhandedly called The Guardian "very brave" for betting on a free-site strategy. The Columbia Journalism Review's Ryan Chittum commended The Guardian for its advertising gains but said it's still leaving money on the table without a metered paywall. "You can set a meter as high as you want, and even then you could simply ask for money, rather than require it," he said.

The New York Times, which pioneered the metered model, continues to tweak it: It announced it's working on NYT Now, a mobile-oriented briefing of Times stories that will cost $8 a month.
NYT Now is the first of several paid products the Times is planning that will run a little cheaper than its main digital subscription.

GETTY IMAGES' FREE EMBEDDED PHOTO PLAN: Getty Images, one of the world's largest stock photo agencies, announced this week that it's making 35 million of its photos free to embed for noncommercial purposes, as the British Journal of Photography reported. Getty's definition of noncommercial use is surprisingly lenient: it counts news sites using photos for editorial purposes, and even blogs that serve Google ads, as noncommercial, provided the photos aren't used to promote a product or service. (All except the top photo in this story are Getty embeds.) As the BBC noted, Getty is admitting defeat with this policy: It knows its pictures are being used illicitly all over the web, and it's conceding to that free use while trying to retain some revenue by gathering data from the sites where its photos are embedded and (possibly, eventually) serving ads with the pictures. Gizmodo and Techdirt — the latter of which has been quite critical of Getty's copyright enforcement efforts — both praised the move, while the BBC pointed out that many professional photographers are livid.
The Verge's Russell Brandom compared the move to the music industry's efforts to adapt to downloading while keeping control, and worried about long-term link rot with embedded images if Getty's terms change. The Lab's Joshua Benton had the most in-depth look at Getty's plan, outlining the shortcomings of its embedding format and characterizing its possible strategy as "(a) get some people to use an embed instead of stealing while (b) making the experience _just_ clunky enough that paying customers won't want to use it."

RUSSIA'S MEDIA OFFENSIVE: Russia's invasion of Crimea and the growing international tensions surrounding Ukraine have dominated the news this week, and as The Huffington Post's Michael Calderone and Luke Johnson reported, Russia has been waging a media offensive alongside its military moves. Crimea's only independent TV channel shut down after threats to its journalists, and masked gunmen seized the offices of the Crimean Center for Investigative Journalism. Both actions drew condemnation from the Committee to Protect Journalists. In Russia, state-run media has been enlisted in the propaganda effort, as Calderone and Johnson, The Daily Beast's Oleg Shynkarenko, and BuzzFeed's Rosie Gray all documented the spin and misinformation being aired there. One anchor on the Russian government-funded TV network RT America, Liz Wahl (an American), drew attention by resigning on air in protest. RT America condemned her resignation as "nothing more than a self-promotional stunt" while praising another RT host who criticized the Russian government on her talk show; its editor-in-chief followed up with a post expressing pride in RT's journalists for withstanding the criticism they go through in the U.S.
Wahl, for her part, criticized RT America in an interview with The Daily Beast for marketing the network to appeal to young Westerners cynical about political authority while downplaying the fact that it's run by an extremely authoritarian organization in the Kremlin. The New York Times talked to a former RT staffer who backed up her claim of covert editorial influence in favor of the Russian government. The Times also talked to experts who said this sort of misleading selectivity is the norm for the network, not just during wartime, but noted that the U.S. media takes a pretty pro-American perspective as well, something RT representatives and experts also told Mashable.

READING ROUNDUP: Lots more news going on this week in the media world beyond the big headlines:
— Time unveiled a redesign that includes new, more dynamic and interactive ad units and, as Poynter's Sam Kirkland pointed out, a more text-heavy, app-like feel. Tech entrepreneur Chris Saad tweaked sites like Time's for web design that mimics iPad design. Meanwhile, Time's (former?) rival, Newsweek, relaunched its print edition this week with an attention-getting cover story claiming to reveal the identity of Bitcoin creator Satoshi Nakamoto, a story that led to a denial, outrage, a car chase, and an ethical debate. The New York Times outlined Newsweek's plans — it's printing 70,000 copies, available for a whopping $7.99 each — and the Lab's Ken Doctor went deeper into its print-web subscription strategy.

— Federal attorneys in Texas dismissed 11 charges against Barrett Brown, who had been accused of trafficking in stolen data for posting a link to the data online. The case has drawn attention from free speech advocates, and the Electronic Frontier Foundation praised the decision to drop the charges. Earlier in the week, Brown's attorneys had filed a motion to dismiss his indictment; you can read Techdirt's commentary on that document here.

— As First Look Media and Glenn Greenwald's The Intercept build on their non-hierarchical, collaborative editorial model, PandoDaily lobbed a conflict-of-interest accusation revolving around funding its owner, Pierre Omidyar, has given to pro-democracy groups in Ukraine. Greenwald responded with a defense of his and The Intercept's journalistic independence, and PandoDaily issued another reply. The Washington Post's Erik Wemple summarized the conflict with some rebukes for both sides, and CUNY's Jeff Jarvis looked at the episode as a case study in the new ethics of philanthropy-based journalism.
— The trial of three Al Jazeera journalists who have been detained in Egypt for several months began this week, with the defendants claiming they've been tortured and the prosecutors presenting, according to The Guardian, farcical evidence from the journalists' hotel rooms. The trial has been postponed until March 24.
— Just like The New York Times in December, Slate hit a traffic high this week with a non-article — a name generator inspired by John Travolta's Oscar-night flub. The Lab's Joshua Benton delved into the ambivalence around non-news content drawing huge traffic at a news site, while the Times explored it as an instance of the gamification of news.

— Three thoughtful pieces to chew on: Dean Starkman of the Columbia Journalism Review wrote a manifesto on the "new consensus on the future of the news," UNC professor John L. Robinson reflected briefly on the pros and cons of his students' social-media-heavy news diets, and the Tow Center for Digital Journalism's Anna Hiatt went deep (naturally) into the future of online longform journalism.
Q&A: Ellen Miller on the Sunlight Foundation's role in increasing the availability of open data
For the past eight years, Ellen Miller has been one of the nation's most prominent activists for open government and open data. She's executive director of the Sunlight Foundation, which she started in 2006, and last week she announced she would be retiring at the end of the year. Miller told me she was looking forward to some time off and, having previously started two other nonprofits, assured me she had no plans to launch another. ("I'm not running for president," she joked.) But the impact of her years at Sunlight is already clear in the much more central role open data and transparency have taken in how we talk about government and journalism today. Miller and I discussed her accomplishments and what she sees as the future of the open data movement. Here's a lightly edited and condensed transcript of our conversation.
JOSEPH LICHTERMAN: So why retire now?
ELLEN MILLER: The reason is that Sunlight is now an eight-year-old organization. It is in the strongest condition it's been in since its founding not very many years ago. But I felt, from a financial and managerial and vision and role perspective, that if I stepped away at the end of this year, I could not leave it in a better place. So I felt the institution was strong enough for its founding director to be able to move on.
LICHTERMAN: Why do you think it's in the strongest place right now? What is going so well that puts the foundation in such a strong position?
MILLER: Well, Sunlight has played a pretty unique role as a catalyst for the open data policy changes that are happening, not only here in the U.S. at the federal level, but also at the state and municipal level and globally — and that has been one of our huge successes, to serve as that catalyst. We sort of invented the word transparency as meaning 21st-century-style disclosure, and now it's become a cultural norm that in order to have a participatory democracy in which accountable government is a watchword, you have to have access to information, you have to have access to data, and the default has to be open.
It's not to say we have reached the state or the condition where everything, both from the process side and from the data and information side, is open, but we have reached the state where it's hard to deny that that is the best policy we should have. There are lots of transition issues, of course. There's lots of political resistance to the idea, because information is power, and that's why we demand that it be in the public arena and why some people don't want it to be in the public arena. But that ball is rolling downhill, and it's a pretty unstoppable one. I think Sunlight has been very instrumental in that.

Another tremendous strength of ours is that we have pioneered making it easy to use the data — particularly in the tech world via APIs, technical interfaces on data. We've shown others how to do that to power their own advocacy and apps. And we've just seen tremendous growth and interest in that among the technical community.

Another example I think of as a strength of ours, in the journalism sphere, has been this tremendous demand that we have nurtured for training on the kinds of tools and databases we have created to make the hard data journalism work easier for those who practice it. Not every reporter in every newsroom has to go and create a database. They can use some of the generic ones we've done around congressional legislation, or money in politics, the connections between political contributions and government grants and contracts, or databases we have created that look at regulatory matters and allow you to do an analysis of who is really commenting on regulatory matters or following particular dockets. So we've had tremendous demand. We've trained thousands, if not tens of thousands, of journalists on those tools, and we have also seen growth and a recognition that many, many outlets are hiring their own data journalists, because they understand the importance of that capacity internally — including hiring some of ours away.
One more piece, too, of our strengths is the policy norms we have established for what open data looks like generally, and what open parliamentary data looks like; those norms are now appreciated at all levels of government. So we work with municipalities, with NGOs in other countries, and with the Open Government Partnership, because the kinds of standards we have set for what these policies should look like have really become the norm, and people are working from those. That's a very important strength of Sunlight's from the policy side.
LICHTERMAN: You're saying this has become more like the norm now. What was it like in 2006 when you started? Why did you think there was a need for this type of advocacy?
MILLER: A lot of government data was either not available or was available, you know, between the hours of 2 and 6 in the basement of some government building that had a sign on it: “Beware of the lion.” We redefined what 21st-century transparency looks like, and that means it's online and accessible to anyone. It means it's online in machine-readable formats. And we said that is the standard — that is our bottom line: Make the data available in machine-readable formats online. Of course, we would like to see more timely collection and publication of that data. But the core of it is making data more accessible, recognizing the fact that this thing called the Internet exists and is a perfect tool for making information available without prejudice, and really redefining what disclosure means in the 21st century: available online, electronically.
LICHTERMAN: That seems like a really radical change, a big shift in thinking.
MILLER: It is. But when Sunlight was founded, we were already getting used to ordering a book, or downloading a movie, or ordering a pair of shoes and having them the next day. And so that larger culture shift, about the impact technology was having on our lives, seemed natural to apply to government and how we interact with government. Whether it's citizens who want to follow the legislation that might affect them via their city council, or via their state legislature, or in the federal Congress. Whether it's something that's fairly high level, or people just really want to know what the bus schedule is for their local neighborhood so they know how long they have to stand outside in the freezing cold. The importance of data is just sort of like: I need to know that, and I need to know it now. That is the 24/7 culture we began living in maybe before 2006, but we are certainly dead center in it now. Governments are recognizing that people expect to get information that way, and they need to be able to provide it that way.
LICHTERMAN: What do you think journalists' role is in all of this?
MILLER: Journalists bring tremendous, tremendous capacity to telling the story behind the data. So the data is data, right? There's no story there. The journalist's job is still to connect the dots. To see whether a state legislator's bills and votes line up with the interests of his or her largest campaign contributors. The same thing at the federal level. To examine whether a city council member's campaign is actually funded by someone who gets a zoning exemption, or how legislation is skewed in the Congress. It's a journalist's core responsibility, I think, to tell that story in a way citizens can understand. It could be deep-dive investigative work, but it could be just as simple as saying, "Wow, 1 + 1 sure looks like it adds up to 2 to me." So journalists remain a critically important link in the data story. Releasing every piece of data the government collects might have absolutely no impact at all if it weren't for journalists.
LICHTERMAN: There's been such a shift in the scene regarding open data, and so much change going on. Where do you foresee this going in the future?
MILLER: I think the genie is out of the bottle in terms of open government data — meaning accessibility online. So I think that will continue. There will be more and more data, and more easy accessibility to it, which means more people will use it. Some for business, some for just pure public consumption, and I think it has the potential to lead to a more engaged and aware public.
There are many, many challenges around my desire for engagement, not the least of which is making sure the things developers build are things people want to use. We have to be more conscious about designing apps and uses of this data. It's not necessarily Sunlight's role, though we do some of this, and when we build something we are much more conscious than we were three or four years ago about how people want to use it, what they want to do with it, and what information they want to know. I think we have to be better about reaching beyond the choir. The choir is huge, if you look at some of the Pew numbers about people who are online seeking political information. I haven't seen any numbers in maybe the last eight or nine months, but at one time the figure was something like 20 million people online seeking political information. That's a big, big group of people. That has to be enlarged. And it can be, because of the distributive nature of technology and the way people use text messaging and other sorts of mobile apps in monitoring elections, et cetera. The potential there, if explored, could conceivably result in a much wider, more fully engaged body politic beyond elections: people getting involved in various kinds of activities, with technology enabling that.
LICHTERMAN: Should it be Sunlight's role to take that on and work to achieve these goals?
MILLER: Sunlight, I think, will continue to focus on the front end of things, which is working on the policy to make sure the data actually does get released. We've had lots of situations here — I've dubbed them transparency theater — where the administration says they're going to release something. For instance, they issued an executive order last March. It came due in December. All the agencies were supposed to make a list of all their publicly held data. And they did, supposedly, and that list is not public. So we FOIAed for it. It's one step forward and two steps back, so there is a tremendous amount of work still to be done on the policy side. There's a tremendous amount of work still to be done working with government on the most sophisticated but simple technical interfaces for releasing data. There's a tremendous amount of work still to be done in designing apps, whether they're being designed in the private sector or by government. So all of that is a precursor to this vision of a much more fully engaged body politic, brought to the political system via technology. For example, Sunlight is in the business now of refining the most successful apps we have. In our first year to 18 months, we developed 18 websites or apps. We were just throwing them out there and seeing what would work. We didn't know what would work. Now we're more serious, and we do much more human-centric design, and we kill things that really are not working, even if it's somebody's favorite and nobody but that one user likes it. Gone. That includes me. So we're really digging into learning what works and being much more sensitive about design, so that we can dig deeper and get more sustained use of things.
LICHTERMAN: What's next for you?
MILLER: I fully intend to retire! This is actually not a joke. Sunlight is the third nonprofit I've founded. I'm the founder of the Center for Responsive Politics and the founder of Public Campaign. I've worked really, really hard. Actually, one of the just thrilling things about my announcement is how many people have written to me and said "You gave me my first opportunity," and now they're off in some senior position someplace. So I really am looking forward to not having a day job. And I have no other plans. And I'm not running for president.

_Photo by Open Knowledge Foundation used under a Creative Commons license._
Getty Images blows the web's mind by setting 35 million photos free (with conditions, of course)

Hey, look, it's some boiled crawfish: And the great Creole fiddler Cedric Watson: And a stock photo of a professor in a classroom: And Walter Lippmann lecturing in 1952: Those photos are all from the esteemed Getty Images — a company we at Nieman Lab and thousands of other publishers have paid good American money to for the use of its photos. And yet I'm not blowing through Harvard's budget by putting those four photos up there. I'm legally and ethically publishing them all here because Getty has, remarkably, decided to allow _35 million_ of its images to be used for free for noncommercial purposes. The British Journal of Photography has the story — it's an attempt to deal with widespread unauthorized posting:
“We’re really starting to see the extent of online infringement,” says Craig Peters, senior vice president of business development, content and marketing at Getty Images. “In essence, everybody today is a publisher thanks to social media and self-publishing platforms. And it’s incredibly easy to find content online and simply right-click to utilise it.” In the past few years, Getty Images found that its content was “incredibly used” in this manner online, says Peters. “And it’s not used with a watermark; instead it’s typically found on one of our valid licensing customers’ websites or through an image search. What we’re finding is that the vast majority of infringement in this space happens with self-publishers who typically don’t know anything about copyright and licensing, and who simply don’t have any budget to support their content needs.” To solve this problem, Getty Images has chosen an unconventional strategy. “We’re launching the ability to embed our images freely for non-commercial use online,” Peters explains. In essence, anyone will be able to visit Getty Images’ library of content, select an image, and copy an embed HTML code to use that image on their own website. Getty Images will serve the image in an embedded player — very much like YouTube currently does with its videos — which will include the full copyright information and a link back to the image’s dedicated licensing page on the Getty Images website.

BJP argues that the move "has single-handedly redefined the entire stock photography market," and while I think that's a slight overstatement, it's nonetheless quite significant. Go here and start searching (look for the > symbol to see which photos are embeddable) to see what you can find. A few thoughts on this big move:

THE COLLECTION IS HUGE, BUT IT HAS BIG HOLES.
Not every image you'll find on Getty can be embedded, and from my initial searches, the share of editorial/news images available seems much smaller than the share of traditional stock photos. Go search for "Obama" and you'll find a gazillion photos, but I didn't find too many that could be embedded, like this one: If you need a purely illustrative photo — something to communicate the idea of "hotel room" or "pulled pork sandwich" or whatever — it seems you're more likely to find something. But if you're looking for photos from this morning in Crimea, you're likely to have a harder time. For the online news organizations that already have licensing agreements with Getty, this new embeddability (?) isn't likely to eliminate the need for those agreements. (Sports photos seemed more frequently embeddable than straight news — again, just from some initial poking around. Here are University of Louisiana point guard Elfrid Payton — you'll see him in a couple years at the next level! — and New Orleans Saints quarterback Drew Brees.)

THERE'S A TROJAN HORSE IN THE LEGAL LANGUAGE.

Getty's not doing this out of the goodness of its heart. It recognizes that images on the Internet are treated as _de facto_ public domain by many people on social networks, blogs, and the scummier parts of the content web. It knows it's highly unlikely to ever get significant money out of any of those people. Even you and I, upstanding Internet citizens, are unlikely to license a photo to tweet it to our followers. So if it can (a) get some people to use an embed instead of stealing while (b) making the experience _just_ clunky enough that paying customers won't want to use it, Getty could eke out a net win. (More on that second point below.) What does Getty get from the embed? Better branding, for one — the Getty name all over the web. Better sharing, for another — if you click the Twitter or Tumblr buttons under the photos, the link goes to Getty, not to the publisher's site.
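All of that control flows from the embed model itself: the publisher pastes a snippet that loads the image from the provider's servers inside a frame the provider controls, so attribution, link-backs, tracking, and any future ads ride along automatically. Here's a minimal sketch of how such a snippet generator could work (the domain, markup, and asset ID are invented for illustration; this is not Getty's actual embed code):

```python
def build_embed_snippet(asset_id: str, width: int = 594, height: int = 396) -> str:
    """Return a hypothetical embed snippet for a hosted image.

    Because the iframe points at the provider's own server, the provider
    decides what renders around the image: attribution, a licensing link,
    analytics, or eventually advertising, with no change on the embedding site.
    """
    viewer_url = f"https://embed.example-images.com/viewer/{asset_id}"  # invented domain
    return (
        f'<iframe src="{viewer_url}" width="{width}" height="{height}" '
        'frameborder="0" scrolling="no"></iframe>'
    )

snippet = build_embed_snippet("461492478")  # made-up asset ID
```

The publisher's page carries only the iframe tag; everything inside the frame stays under the image host's control, which is exactly the lever the terms of service reserve.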
But there are two other things Getty gets, according to the terms:
Getty Images (or third parties acting on its behalf) _may collect data related to use_ of the Embedded Viewer and embedded Getty Images Content, and reserves the right to _place advertisements in the Embedded Viewer or otherwise monetize_ its use without any compensation to you.

Aha! The data collected could have internal use (measuring what kinds of images are popular enough to invest in more stock photos, for instance). But it could also help with those ads. Imagine a day, five years from now, with Getty photo embeds all over the web, when they flip the switch — ads everywhere. Maybe there's a photo equivalent of a preroll video ad and you now have to click to view the underlying image. Or a small banner on the bottom 90px of the photo. And imagine your website has used a lot of Getty embeds over the years — enough that Getty can actually sell ads _specifically targeting your website_, using all that data it's gathered. Or imagine there are enough Getty embeds that it could sell ads only on photos of Barack Obama, or only photos about Cajun music, or only photos about restaurants in Kansas City. You can start to see the potential there. Think of how many YouTube videos were embedded on other websites before Google ever started putting ads on them. To get to that potential, Getty needs to have its photos everywhere. If it's already accepted that it won't make money from these small bloggers and publishers via licensing, why not use them as a Trojan horse? Who knows if it would ever come to that, but it's a possibility specifically outlined in the terms of service.

GETTY'S DEFINITION OF "NONCOMMERCIAL" IS BOLD.
In order to get to that kind of scale, Getty allows "noncommercial" use. But the Internet has never been able to decide what "noncommercial" really means. If you're selling a photo for profit, sure, that's commercial. If you're using it in an ad for your product, sure, that's commercial. But what if you're using it on a website that has ads — is that enough? Or how about if you're a freelancer and you're using it on a site meant to promote your career — is that commercial?
Longtime readers may remember that, in 2011, Wired released a set of its photos under a Creative Commons license. Their definition of noncommercial allowed "editorial use by bloggers or any other publisher," including those that had ads on them. As a publisher (even one without ads!), I like that broader definition — but it's not the one that most Creative Commons users prefer. (Much, much more about that here and here.) What "noncommercial" means is something Creative Commons has never really been willing to take a clear stand on. (Imagine some extremely hypothetical future day when we put ads on Nieman Lab. Do all the CC photos here become a rights violation or not?) In any event, Getty is clear that its definition of noncommercial is closer to Wired's than to the typical Creative Commons user's. Here's how it puts it in the terms of service:
You may only use embedded Getty Images Content for editorial purposes (meaning relating to events that are newsworthy or of public interest). Embedded Getty Images Content may not be used: (a) for any commercial purpose (for example, in advertising, promotions or merchandising) or to suggest endorsement or sponsorship; (b) in violation of any stated restriction; (c) in a defamatory, pornographic or otherwise unlawful manner; or (d) outside of the context of the Embedded Viewer.

As Getty told BJP (emphasis mine):
Blogs that draw revenues from Google Ads will still be able to use the Getty Images embed player at no cost. "We would not consider this commercial use," says Peters. "THE FACT TODAY THAT A WEBSITE IS GENERATING REVENUE WOULD NOT LIMIT THE USE OF THE EMBED. What would limit that use is if they used our imagery to promote a service, a product or their business. They would need to get a license." A spokeswoman for Getty Images confirms to BJP that editorial websites, from The New York Times to Buzzfeed, will also be able to use the embed feature as long as images are used in an editorial context.

THE EMBED IS KINDA CRUMMY. There are at least three significant problems with it from a publisher's point of view:

— The embed, by default, has no way to resize to different dimensions. (Unlike, say, the YouTube embed, which can be quickly resized to whatever size of content well you'd like.) You can change it manually if you'd like by fiddling with the width and height of the embed's iframe, but that (a) takes math to derive the height from the desired width and (b) isn't even simple math, because the credit area underneath the photo means that the height/width ratio of the photo isn't the same as the ratio of the embed. You basically have to change the width and then manually eyeball the height until it looks right. I know, #firstworldpublisherproblems — but it makes the process less friendly.
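That width-to-height math is at least scriptable. Here's a minimal sketch of the calculation — note the credit-bar height is a made-up placeholder, since Getty doesn't document the embed's internal dimensions:

```python
def embed_height(new_width, orig_width, orig_height, credit_height=69):
    """Derive an iframe height for a resized Getty-style embed.

    Assumes the credit/sharing bar keeps a fixed pixel height (the
    default of 69 is a hypothetical value, not Getty's actual number)
    while the photo above it scales proportionally -- which is exactly
    why the naive new_height = orig_height * new_width / orig_width
    comes out wrong.
    """
    photo_height = orig_height - credit_height           # photo area only
    scaled_photo = photo_height * new_width / orig_width  # keep photo ratio
    return round(scaled_photo + credit_height)            # add the bar back
```

For example, halving a hypothetical 594×465 embed's width to 297 doesn't halve its height, because the credit bar doesn't shrink with the photo.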
— The embed is not, by default, perfectly responsive. So if your site is meant to adjust responsively from phones up to desktops, your embedded Getty image won't always adjust with it. There are probably workarounds, as there are for Instagram's similarly unresponsive image embeds or YouTube's, but again, it's a pain.

— Being restricted to an embed means that the photo can't travel with the post. For instance, I'd love to use one of the lovely Getty photos in this story as what WordPress calls its "featured image" — meaning the photo that will show up when this story is on our homepage or in a search result or on my author archive page. But I can't do that with a remote embed — I can only do that with an image that lives on the Nieman Lab server.

For Joe Blogger, none of those are likely a deal killer. (It's even less of a deal for his neighbor, Joe Spam Blogger.) But those guys are probably comfortable just stealing the image directly anyway. I think these technical issues are enough of a roadblock to keep Getty embeds out of nearly any major publisher's regular workflows.

Also, a technical note: The way the embeds are set up, it's trivial to resize the iframe to eliminate the Getty Images credit and sharing tools at the bottom. For instance, here's that same photo of Cedric, only this time resized to cover the credit: Looks a lot like I paid for that photo now, doesn't it? (I'm no lawyer, but I don't even see how this violates the terms of service around embeds.)

(UPDATE, 7:50 P.M., MARCH 6: Interesting! Getty appears to have changed how its embed works to combat people hiding the credit and sharing tools. Now Cedric doesn't take up the full width of the iframe if the height is too short and the credit is force-displayed.)

I'M NOT SURE THIS IS A "REDEFINITION" OF THE STOCK PHOTO MARKET. There's no doubt that this will further increase negative pricing pressure on the stock photo market. But that negative pricing pressure has been around for _years_.
That pressure dates to 2000, when iStockphoto burst onto the scene and radically undercut the existing competition, which was charging many multiples of iStock's price. In 2006, iStockphoto, the great undercutter of pro photography business models, was bought by… _Getty_. (In other words, these guys understand disruption in the photo business.)

This move requires uptake, but the _right kind of uptake_. Ideally, it would generate new value among the web scofflaws while not harming Getty's business with pro publishers. I'm not sure these embeds hit that balance. The workflows are too ungainly for the people who currently have contracts with Getty, true, but they're also not quite easy enough to be a good substitute for people who don't mind stealing. My wager is that, as transformational as this announcement might seem to be, Getty's embeds won't be pockmarking the web. But no matter how it turns out, give Getty a lot of credit for being willing to take a highly unorthodox stance. It's an effort very much worth watching.
The model for the 20th-century American newspaper was to be all things to all people, in one amalgamated package. One daily bundle of newsprint could give you baseball box scores, reports of intrigue in Moscow, gardening tips, an investigation into city hall, and Calvin and Hobbes. The Boston Globe is betting that the model of the 21st-century American newspaper won't be a single digital product — it'll be a _lot_ of them.

Since 2011, the Globe has been perhaps the best-known proponent of what's been known as the two-site strategy: a paid site that includes the newspaper's top journalism (BostonGlobe.com) and a free site that aims to be more realtime, quick-hit, and webby (Boston.com). It's a strategy that hasn't been without problems — ask your average Bostonian to describe the difference between the two sites and you might get a confused look — but the Globe's plan for the future is to separate the sites even more, not to bring them back together. And they'll be spawning still more digital offshoots along the way — each free-standing, even battling its own siblings where appropriate. "We think there is plenty of room for both of these sites," Globe editor Brian McGrory told me. "Our intention is they compete like crazy with each other."

Of the numerous changes the Globe's announced this week, perhaps the most important is the replacement of BostonGlobe.com's hard paywall with a meter — up to 10 free stories a month. The tradeoff is that Boston.com will no longer publish any content generated by the Globe's staff: not just newspaper stories but also staff blogs, chats, and more. With the exception of video, which the sites will still share, everything else moves over to BostonGlobe.com. The hope is that'll make the distinction between the two clearer, and that the meter will mean that the webbier parts of Globe-produced content will still have a chance to reach a non-paying audience.
For Boston.com, the separation is physical, too: Its staffers are moving out of the Globe newsroom into their own space, essentially making the two sites roommates. (Albeit roommates who'll be moving soon; new owner John Henry has said he plans to sell the company's Morrissey Boulevard property.) A Boston.com redesign, in the works for several years now, is finally approaching launch, on mobile by the end of March and on desktop in April.

McGrory told me the new strategy should help the sites become more distinctive, which in turn should drive an increase in audience. "We're hoping obviously that it boosts traffic significantly," he said. "But let's be clear though: The paywall worked. It did exactly what we wanted it to do." The company increased its digital subscribers from 28,000 in 2012 to 47,000 in 2013 to "nearly 60,000" today. McGrory said a metered paywall will help grow the audience by giving potential subscribers a sample of the Globe's work. "Essentially we're doubling down the bet on our own journalism," McGrory said.

And the two-site strategy is just the beginning. On Monday, the company launched BetaBoston.com, a site focused on the local tech and biotech industries. Last fall it launched BDCwire, an arts and entertainment site aimed at younger readers. And this spring, the Globe plans to debut a site covering Catholicism anchored by reporter John Allen. "You've seen a lot of organizations over the past few months focus on niche verticals. That's something we believe strongly in," said David Skok, the Globe's new digital advisor (and, full disclosure, a recent Nieman Fellow). Michael Morisy, editor of BetaBoston (and a former Lab contributor), said the site has the potential to amplify the work the Globe has already been doing on innovation, from startups in Kendall Square to biotech companies. (It's a space with some competitors already in place, like BostInno and Xconomy.)
Operating independently of the Globe (though it'll include contributions from Globe staffers) means BetaBoston can build an audience of people who might not regularly read the Globe, Morisy said. "We aren't just covering startup funding or apps: We're covering things more broadly," Morisy said. "Internally, I refer to our domain as how innovation is enhancing greater Boston and how Boston innovation is changing the world."

Similarly, McGrory said the Catholic site has a strong potential readership both in the Boston area and beyond. "Here in Boston, we know we have a pretty Catholic state and a pretty Catholic region. We think there is a reasonably good fit there," he said.

Expect more to come. While he wouldn't say what the next Globe digital franchises might be, McGrory said he thinks the three key sectors in Boston are technology, health care, and education. If BetaBoston is successful, those other two could be the next targets. "My every hope is to use that for a template for what we do in the other concentrations next," McGrory said. In the broader sense, the Globe wants to build its fortunes off a collection of distinct audiences rather than one monolithic one: dedicated news hounds going to BostonGlobe.com, information grazers at Boston.com, younger readers and music lovers at BDCwire, and the tech-inclined at BetaBoston.
It's part of continuing change at the Globe under Henry, which has also included new subscription enticements in print. But the digital moves are also a course correction on the company's digital strategy. In the two years since the Globe divided its digital presence, BostonGlobe.com has allowed the company to grow a digital subscription business from scratch — an important new source of revenue in an uncertain environment for online advertising. But the brand confusion between the two sites has been a persistent problem and the relaunch of Boston.com has been a work-in-progress for years.
In an interview last year, McGrory told me, "I'll be blunt: Right now, it's a bit of a muddle between the two sites. There's not enough of a distinction between what is BostonGlobe.com and what is Boston.com." The fate of Boston.com is something that is also clearly on the mind of Henry. In his profile of the publisher last month, Jason Schwartz of Boston Magazine wrote: "The vision for the new Boston.com is to be, as Henry puts it, 'a phone-first website' and totally independent of the Globe — something like a mixture of the Huffington Post with BuzzFeed."
Fully separating out the teams behind the sites, more than simply making changes to the content or design, might have a greater impact on the fortunes of Boston.com and BostonGlobe.com. In a conversation here at the Nieman Foundation last year, Harvard Business School professor Clay Christensen (on stage with a pre-Globe Skok) was asked about his thoughts on the Globe's two-site strategy. You can see Christensen respond when told that the staffs of Boston.com and the Globe proper weren't as distinct as one might hope: "Oh my gosh." When I spoke to Skok on Tuesday, he said he's planning to apply some of the ideas of disruptive innovation in his new role at the Globe. Specifically, Skok said disruptive innovation emphasizes making sure that the priorities and resources at the heart of a company are aligned with experimentation. Launching independent sites, and creating clearer lines between Boston.com and BostonGlobe.com, will go a long way towards helping that, Skok said. McGrory sees the Globe's future in similar terms: "It says we're going to be as innovative as we need to be in digital formats, that we're willing to experiment and willing to try new things," he said. _Photo by Scott LaPierre used under a Creative Commons license._
NFL teams spend the offseason reflecting on the past season and preparing for the next one. So like any good playcaller, the team at The New York Times' 4th Down Bot is spending the months between the Super Bowl and the start of the 2014 preseason examining what it's learned and how it can improve for the coming season.
The 4th Down Bot performs realtime analysis of every fourth down play in the NFL and determines whether a team should go for the first down, kick a field goal, or punt the ball away. The bot was a collaboration between the Times and Brian Burke of Advanced NFL Stats, who originally built the code. On his personal GitHub today, Kevin Quealy, a graphics editor at the Times, writes about how the project was developed and outlines some of the bot's successes (more than 10,000 Twitter followers!) and areas where it could be improved (a slow response time). The bot launched in time for Week 13 of the NFL season (in the midst of my Lions' annual late-season collapse, when the bot was a welcome tool to further question now-fired coach Jim Schwartz's competence). Quealy said the bot probably should've launched sooner — a non-football kind of MVP — and he outlines a few other issues he might try to address in training camp:
It could feel more "live". The lag between the end of the play and the analysis takes about a minute, but sometimes the delay on the play-by-play data lagged a bit, which meant you were getting bot analysis well after the other team started its drive. This isn't ideal, but there just wasn't much we could do about it. Because it was programmed to analyze decisions that already happened, some aspects of N.F.L. play aren't captured well. For example, when a team intentionally takes a penalty on 4th and 1 near midfield, the bot applauds the punt on 4th and 6 without properly scolding the 4th and 1. This particularly annoyed Aaron Schatz of Football Outsiders, who later got over it. As many statisticians noted, it could display uncertainty better than it does. From my perspective, that's the most legitimate criticism, and we hope to improve on it next year.

In his post, Quealy wrote that they're "hoping to introduce a cousin or two this summer, too." In a followup email, Quealy told me he's looking to apply some of the successful aspects of the 4th Down Bot to other fields — possibly baseball or politics. "But it's not like I have a super secret project on my desktop that is already [4th Down Bot's] next of kin," he wrote. "Plus, we're hoping to add features and build on our audience with 4th Down Bot, which is not an insignificant amount of work."
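At its core, the bot's call is an expected-value comparison across the three options. Here's a toy sketch of that comparison — the probabilities and point values below are illustrative inputs I made up, not the actual Times/Burke model, which derives them from historical play-by-play data:

```python
def fourth_down_call(p_convert, ev_success, ev_fail,
                     p_fg, ev_fg_made, ev_fg_missed, ev_punt):
    """Return (best option, expected points for each option).

    Each argument is an illustrative input: the probability of
    converting or making the kick, and the expected points of the
    game state that results from each outcome.
    """
    expected = {
        "go for it":  p_convert * ev_success + (1 - p_convert) * ev_fail,
        "field goal": p_fg * ev_fg_made + (1 - p_fg) * ev_fg_missed,
        "punt":       ev_punt,
    }
    return max(expected, key=expected.get), expected

# 4th-and-1 near midfield: conversion is likely, a field goal is out
# of range (modeled here as a low make probability), punting is safe
# but low-value. With these made-up numbers, the math says go for it.
call, evs = fourth_down_call(0.65, 2.0, -1.0,
                             0.10, 3.0, -2.0, 0.4)
```

The interesting design problem the bot's critics raised — displaying uncertainty — is exactly what this point-estimate comparison hides: two options whose expected values differ by a tenth of a point look just as decisive as a blowout.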
Often, when there's talk about algorithms and journalism, the focus is on how to use algorithms to help publishers share content better and make more money. There's the unending debate, for example, over Facebook's News Feed algorithm and whose content it's giving preference to. Or there are people like Chris Wiggins, recently interviewed by Fast Company, who uses his decade-long career as a biology researcher and data scientist to advance the mission of The New York Times from outside the newsroom. But Nick Diakopoulos, a Tow Fellow at Columbia, wants to expand what journalists think about when they think about algorithms.
"One of the key points Evgeny Morozov makes in his last book about the religion of optimization is that we're always just thinking about how to optimize things. The optimal solution isn't always necessarily the best solution, but because it represents some kind of monetary gain, I think it does get the most attention," Diakopoulos says. "As an academic, I think the more interesting questions for journalism now is: How do you integrate algorithms into reporting or storytelling?" Diakopoulos doesn't mean using algorithms to visualize data, though. He wants reporters to learn how to _report on_ algorithms — to investigate them, to critique them — whether by interacting with the technology itself or by talking to the people who design them. Ultimately, writes Diakopoulos in his new white paper, "Algorithmic Accountability Reporting: On the Investigation of Black Boxes," he wants algorithms to become a beat:
We’re living in a world now where algorithms adjudicate more and more consequential decisions in our lives. It’s not just search engines either; it’s everything from online review systems to educational evaluations, the operation of markets to how political campaigns are run, and even how social services like welfare and public safety are managed. Algorithms, driven by vast troves of data, are the new power brokers in society.

Investigating algorithms is Diakopoulos's main focus at Tow. Over the summer, he did some research into Google's auto-complete algorithm, ultimately publishing his findings at Slate.
Diakopoulos then became interested in how other journalists were getting to the core questions of algorithm building. Specifically, he wanted to know what processes they were using to figure out what goes into an algorithm, and what is supposed to come out. After looking at examples of investigations into algorithms from ProPublica, The Wall Street Journal, The Atlantic, The Daily Beast, and more, Diakopoulos wrote up his observations for The Atlantic. The Tow white paper, published last month, is an expansion of those thoughts and inspired a panel at the NICAR conference held last week in Baltimore, where Diakopoulos shared the stage with Frank Pasquale, The Wall Street Journal's Jeremy Singer-Vine, and The New York Times' Chase Davis.

"I started developing the theory a little bit more and thinking about: What is it about algorithms that's uncomfortable for us?" Diakopoulos told me. "Algorithmic power is about autonomous decision making. What are the atomic units of decisions that algorithms make?"

It's inherently challenging for people to be critical of the decisions that machines make. "There's a general tendency for people to be trusting of technology," says Diakopoulos. "It's called automation bias. People tend to trust technology rather than not trust technology." Automation bias is what causes us to follow wrong directions from a GPS, even if the route is familiar. It's also, some say, what tragically caused Air France Flight 447 to crash into the Atlantic in 2009. In the same way, Diakopoulos argues, we believe that Match.com's algorithms want us to find true love, that Yelp wants to find us the best restaurants, and that Netflix wants to show us the most entertaining movies. Diakopoulos argues that it's a journalist's responsibility to determine whether that's true or if it's just what those companies want us to believe. (Remember: Media companies use algorithms too, which means "some of the same issues with transparency arise" there as well.)
"Because [algorithms] are products of human development, there may be things that we can learn about them by talking to the people who built them," says Diakopoulos. In the paper, he writes that "talking to a system’s designers can uncover useful information: design decisions, descriptions of the objectives, constraints, and business rules embedded in the system, major changes that have happened over time, as well as implementation details that might be relevant." But more often than not, interviewing engineers is not an option — even government algorithms are protected by a FOIA trade secret exception that shields third-party source code. There is at least one known instance of source code being successfully accessed via FOIA:
@ceodonovan @ndiakopoulos @mattwaite Here's an example from @morisy. Code: https://t.co/cbpGMrT6K2 Explanation: https://t.co/8brKZamABg -- Jeremy Bowers (@jeremybowers) March 3, 2014

But ultimately, Davis said, reporters shouldn't expect to get much information out of the government. "Reporting on these things and getting information out of institutions is similarly black-box. These things are protected trade secrets. FOIAing source code, if it was put together by a government contractor, or even maybe the government itself? Good luck. Probably not going to happen."

At NICAR, Pasquale told journalists to pay attention as laws about accessing technological information develop. "I would recommend that everyone in this audience, to the extent that journalists can be advocates, that we closely follow the workshops on data brokering and algorithm decision making that the FTC is putting on," he said.

If you can't talk to the people who wrote the algorithm, what next? "When corporations or governments are not legally or otherwise incentivized to disclose information about their algorithms, we might consider a different, more adversarial approach," Diakopoulos writes. That approach is reverse engineering, which basically means putting things into an algorithm, observing what comes out, and using that information to make some guesses about what happened in between — the zone of the black box. It's an idea supported by a basic principle of engineering: Taking something apart is a good way of learning how it works.
For example, when ProPublica decided to learn more about personalized emails being sent by the Obama campaign, they launched a project called Message Machine. The idea was to reverse engineer the algorithm by crowdsourcing copies of campaign emails and comparing the messages to basic survey data. When The Wall Street Journal and Singer-Vine wanted to find out whether or not online vendors like Staples were selling items for different prices based on who the shopper was, they also used reverse engineering. The paper built proxy servers and developed user profiles to create "painstakingly simulated inputs" which they compared to the price outputs. What they found was that stores seemed to be varying their prices based on where customers were located geographically.
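A stripped-down version of that input/output probing might look like the sketch below. Everything here is hypothetical: the "hidden" pricer is a toy stand-in for a vendor's algorithm so the example runs, whereas a real test (like the Journal's) would make live requests through simulated user profiles:

```python
def probe(black_box, item, profiles):
    """Vary one input (the shopper profile) while holding everything
    else constant, and record what comes out of the black box."""
    return {p["zip"]: black_box(item, p) for p in profiles}

# Toy stand-in for a vendor's hidden pricing algorithm: the price
# (in cents, to avoid float noise) depends on the shopper's ZIP code.
def hidden_pricer(item, profile):
    base = 1999
    return base - 300 if profile["zip"].startswith("0") else base

profiles = [{"zip": "02138"}, {"zip": "90210"}]
observed = probe(hidden_pricer, "stapler", profiles)

# Identical item, different outputs: geography is doing work inside
# the black box -- the shape of the Journal's Staples finding.
varies_by_location = len(set(observed.values())) > 1
```

Note what this does and doesn't establish: the probe shows *that* the output depends on location, but says nothing about *why* the engineers built it that way — which is exactly the limit Diakopoulos flags below.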
"If you think about a cell in your body, a biologist is reverse-engineering that cell to understand how it works. They're poking and prodding it, and seeing how it responds, and trying to correlate those responses to come up with some theory for how that cell works," Diakopoulos says. Citing Paul Rosenbloom's book _On Computing: The Fourth Great Scientific Domain_, Diakopoulos argues that some of our algorithmic systems have become so large, they're more usefully thought of as resembling organisms rather than machines. "The only way to really understand them is by applying the same techniques that we would apply to nature: We study them with a scientific method. We poke and prod them and see how they respond."

Singer-Vine spoke to the same issue at NICAR. As algorithmic systems grow and grow, finding a language to explain what you've learned without using misleading metaphors or implying causation where it doesn't exist is only going to get more complex. "To translate your findings is only going to get harder," he says. "If an algorithm is so complex that nobody can explain why one thing happened because of another, maybe that's a story in and of itself."

Of course, there are challenges and risks to using reverse engineering. Ultimately, while it can sketch a story, it can't tell you exactly why or how engineers wrote an algorithm. "Just because you found a correlation between two variables in reverse engineering, doesn't mean there was some programmer who wrote that algorithm who decided that it should be that way. It might have been an accident. There may have been some complexity in the algorithm that was unforeseen. You can't make the inference that this was done on purpose," says Diakopoulos.

Nick Seaver, who studies algorithmic cultural phenomena at UC Irvine, says that while Diakopoulos's examples of journalistic reverse engineering are intriguing, he's ultimately more of a pessimist regarding reverse engineering's potential as a reporting tool.
In an email, Seaver writes:
The systems we're talking about are complicated and change very rapidly. Words that are not blocked today may be blocked tomorrow, and if we're dealing with machine learning algorithms, they might be blocked for reasons that even their own engineers don't immediately understand. These systems are also constantly under A/B testing, so there is more than one system in play at any given moment. […] As a result, on major websites, there can be literally millions of different permutations in use at any given moment. So, reverse engineering, while it is miles ahead of what many journalists do when it comes to reporting on algorithms, is up against some significant challenges.

At NICAR, Pasquale brought up another example, Nathan Newman's 2011 investigation into potential racial implications of the algorithm that determines the personalization of Google ads. While Newman's findings were interesting, Pasquale says, "Google says you can't understand their methodology based on personalization." Google's full rejection of his claims read:
This post relies on flawed methodology to draw a wildly inaccurate conclusion. If Mr. Newman had contacted us before publishing it, we would have told him the facts: we do not select ads based on sensitive information, including ethnic inferences from names.

But Diakopoulos sees a future where the tactics of reverse engineering and the tools of reporting are blended into a unified technique for investigating algorithms. "You see how far you can get — see if you can find something surprising in terms of how that algorithm is working. Then, you turn around and try to leverage that into an interview of some kind. At that point, the corporation has an incentive to come out and talk to you about it," he says. "If you can combine that with observation of the system through reverse engineering, then I think through that synthesis you get a deeper understanding of the overall system."

As journalists charge after this potential new trend in data journalism, however, it's worthwhile to remember this note from Diakopoulos on the consequences of revealing too much about the algorithms we rely on. He writes: "Gaming and manipulation are real issues that can undermine the efficacy of a system. Goodhart’s law, named after the banker Charles Goodhart who originated it, reminds us that once people come to know and focus on a particular metric it becomes ineffective: 'When a measure becomes a target, it ceases to be a good measure.'"

During the NICAR panel, Singer-Vine raised the same issue, pointing out that making the mechanics of algorithms publicly available also makes them more "gameable." (Ask all the SEO consultants for whom understanding Google's search algorithms is the holy grail.) But Pasquale pushed back on this concern, arguing, "Maybe we should try to expose as much as possible, so that the people involved will build more robust classifiers."

Diakopoulos's plans to move forward with algorithm investigations are primarily academic.
Although the paper concludes it's too soon to consider how best to integrate reverse engineering into j-school curricula, Diakopoulos says it's a major interest area. "I think if we taught a joint course between journalism and computer science, we would learn a lot in the process," he says. Primarily, he's interested in finding more examples of journalists using reverse engineering in their work — not that he has any shortage of his own ideas. Given the opportunity, Diakopoulos says he'd be interested in investigating surge pricing algorithms at companies like Uber, exploring dating site algorithms, looking for bias in selection of immigrants for deportation, and more. He's also had conversations with Columbia's The New York World about collaborating on an investigation of New York municipal algorithms. For journalists who want to be thinking more critically — and finding more stories — about the machines we rely on, Diakopoulos offered some advice on newsworthiness. "Does the algorithm break the expectations we have?" he asked. "Does it break some social norm which makes us uncomfortable?" _Visualization of a Toledo 65 algorithm by Juan Manuel de J. used under a Creative Commons license. _
If you watched the Oscars Sunday night, you probably saw John Travolta mangle Idina Menzel's name into "Adele Dazeem": It got lots of attention online, and Slate jumped on it with The Adele Dazeem Name Generator, which tells you how John Travolta might mispronounce your name. (If you're interested, the Nieman Lab staff is made up of Jorja Brazent, Julian Edjans, Chantelle Orteez, and Jonah Warshington. Also, "Nieman Lab" is Niven Loing.) The story went _insane_ on social media and became the most popular story in Slate history. Which led Slate editor David Plotz to tweet:
Definition of ambivalent: The John Travolta name generator is the most popular story in Slate history. http://t.co/nT4kJp7J2Q -- David Plotz (@davidplotz) March 4, 2014

That ambivalence, I'd wager, is a lot like the feeling some had in The New York Times' newsroom when they realized the most popular Times story of 2013 was a dialect quiz made by an intern. Those reporters and editors get into the business to do certain kinds of work, but the factors driving today's news web — the availability of analytics, the rise of social sharing, and what remains of the CPM-driven advertising model — mean it's increasingly clear that popularity doesn't always line up with the work you'd want mentioned in your bio. I emailed Plotz to get a little more about that ambivalence, and here's what he had to say:
Ambivalence is not quite the right feeling. Maybe bemusement. The Travolta name generator is a delightful piece of work that brings pleasure to millions — literally millions — of readers. It's fast, fun, and on the news, and I am unbelievably proud of the clever work that went into it, and glad for the joy it has brought so many readers. On the other hand, I have to giggle that this project is attracting millions of readers, and crushing stories about Ukraine or Obama under its boot. And on the third hand, one of the most popular Slate stories in the past six months before Travolta came along was Josh Levin's The Welfare Queen, an 18,000-word masterpiece about the woman Ronald Reagan villainized for bilking the government, who turned out to be even more fascinating, and loathsome, than Reagan ever could have imagined. So our readers go low with us, and they go high with us, and, like Pharrell, we're happy either way.
Maybe the third time is the charm. Three years before Don Graham and Jeff Bezos talked about selling and buying The Washington Post, the Graham family bid goodbye to its second-favorite son, Newsweek. Sidney Harman, then 91, optimistically bought it for a buck, taking on $50 million in liabilities and promising a rebirth. Within three months, Harman struck a deal with Barry Diller and Tina Brown merging Newsweek with The Daily Beast, creating an unusual creature that could not be easily cataloged. By the middle of 2012, Harman had passed away and his family stopped investing in the business, leaving it fully in the hands of Diller's IAC. A year later, acknowledging failure, IAC sold Newsweek — once a proud No. 2 to the inventor of the newsweekly, Time — for a pittance to IBT Media. "I wish I hadn't bought Newsweek," Diller said, calling weekly print in the digital era a "fool's errand." The buyer seconded that notion. "We are 100 percent digital company and so that's been our forte," said IBT CEO Etienne Uzac. "In the future there might be some opportunity for print but not right now." The siren call of print, though, is just too seductive, even as each successive ownership has gotten a generation younger. Uzac, 32, and IBT cofounder Johnathan Davis, 31, share neither the name recognition nor the longstanding love of print with the Grahams, Harmans, Browns, or Dillers — but print beckoned to them nonetheless. This Friday, the new print Newsweek, which hadn't been published since the end of 2012, hits the stands. Maybe "hits" is too strong a word. At a whopping $7.99 a copy, it should perhaps land on a soft pillow, the better to protect the paper quality that is promised to be twice that of longtime nemesis Time.
It may seem like one more turn of a familiar storyline: the reckoning of once-iconic news brands, national and local ("The newsonomics of the print orphanage, Tribune's and Time Inc.'s"). In fact, IBT's Newsweek gambit is more interesting. It's a mixmaster of many trends of our day, including the ascendancy of reader revenue models, the central place of powerful platforms in digital publishing, and the use of analytics to drive new news businesses forward. The Newsweek push and IBT Media's wider efforts combine lessons from Vox Media, The Economist, The New York Times, and even Time. The newest Newsweek strategy is both old-fashioned and radical. It's old-fashioned in the sense that it is reviving a ghost print brand with printing presses on two continents. It's radical in its pricing. Even the high-flying, high-quality weekly New Yorker charges only about $79 a year, while Time goes for $30 and The (monthly) Atlantic for about $25. Newsweek is going _way_ beyond those prices. The print launch is all wrapped up in a lovely package, led by magazine veteran Jim Impoco. In fact, that package — with design by Robert Priest and Grace Lee — seems almost anachronistic, a throwback to another era of plush and flush magazines. In a sense, the newest Newsweek is trying to create its own category: NewsLuxe. If $8 seems a bit rich for a newsmagazine, subscribers can pay the equivalent of a still-high $3 a week for a new subscription. The new Newsweek, I've found out, is joining the all-access parade with this planned pricing:

* $149.99 for a year initially, providing mailed magazines and access via desktop, tablet, and smartphone.
* $39.99 for a year initially for digital-only access. There will be an initial $9.99 offer for three months of digital-only access; that price is then slated to move to $12.99.

The pricing had better work. The Newsweek paywall is going up this week, as the print launch (party at SXSW) is readied.
Davis, who also serves as chief content officer, says the business model is heavily dependent on reader revenue. He expects that 80 to 90 percent of the new Newsweek's income will come from readers, with only 10 to 20 percent from advertising. Whether this experiment turns out to be a great new way to fund journalism or the makings of a spectacular failure, you have to give Davis and Uzac credit for rethinking old paradigms. They'll tell you they didn't expect to go into print. They looked at the low-priced print model they inherited — "the more you printed, the more you lost" — and figured the business was "upside down." With the new high pricing, each copy printed, they say, will be profitable. To be sure, Newsweek isn't planning on selling hundreds of thousands of print subscriptions. Its goal, says Davis, is 100,000 U.S. subscribers and 100,000 in EMEA (Europe, Middle East, Africa) within 12 months; other licensing agreements will produce other editions on other continents. Its initial press run: 70,000. That's down from about 3.3 million at Newsweek's height. The bet: that many older Newsweek subscribers, longing to renew their habit, will re-subscribe. The company is poised to mine its lists of somewhere between 200,000 and 400,000 names. Those names and data have now passed through several owners and incarnations, so their reliability is a significant question for subscription conversion.
While Newsweek will get lots of attention this week for its seemingly anachronistic print launch, its digital pricing also breaks new ground. It's going to market with a metered paywall, one powered by Europe's Piano Media, now making its first big foray into the U.S. after successful launches in central and eastern Europe, with more western European publishers queuing up. The paywall isn't unusual; the pricing is. Charging $39.99 for digital-only access is high, especially considering that the _seeming_ competition, Time, can be had — in both print and digital — for no more than $30 a year. What makes the revivers of a ghost brand believe customers will pony up? "We'll deliver the quality of a monthly every week," says editor Jim Impoco. Impoco is a well-known talent in the magazine world, with U.S. News, Fortune, Men's Journal, and the short-lived but highly regarded Portfolio magazine among his career stops. That means investigative reports and storytelling and visuals that fit the quality of the paper they are printed on. Impoco's acceptance of the Newsweek job was based in part on the place he believes it can occupy in the _contemporary_ world, with more than 2 million Twitter followers.
We're back in print! @nytimes has the details on our new print magazine: http://t.co/TrB268fqQ9 #LONGLIVEPRINT -- Newsweek (@Newsweek) March 3, 2014
He's assembled a talented staff of 30 full-timers, plus additional contractors. Impoco says he doesn't believe the star system is the new thing in digital news ("The newsonomics of David Pogue and the Pujols effect"). The star is the brand, the up-from-the-ashes Newsweek. Its New York staff is largely filled out, and its London newsroom got a jumpstart today, as Richard Addis was named to head up the EMEA edition out of London. Addis is the former editor of Canada's Globe and Mail and London's Daily Express. Under Impoco, the new Newsweek should be an impressive, worthwhile read. The question is whether it can be _sufficiently_ impressive to win large paying audiences fairly quickly. Even if it offers a lively product online, the paywall is sure to send many testers returning to the free Slates, HuffPosts, and BuzzFeeds. The meter will allow some sampling, of course, but it is going to be tough to get readers to click that $9.99-for-three-months button, much less the $40-for-a-year one. What could determine the success or failure of the new Newsweek?
* CONSIDER THE GLOBAL ANGLE. Newsweek — like the about-to-be-sold Forbes ("The newsonomics of Forbes' real performance and price potential") — may have more value globally than domestically. We can make a case that these iconic magazine brands experienced their U.S. heyday a decade or two ago. While they aren't exactly Coca-Cola overseas, they still exude a brand resonance that is attractive to buyers.
* LIMITED SAMPLING. Metered paywalls can work, but most of the paywall-related money earned comes from existing print subs. Newsweek is rebuilding from a zero base there; lapsed former print subscribers will be hard to convert, especially at the high prices. Obtaining new digital-only subs is a painstaking, slow-building process. It may have made more sense to find sponsors or underwriters to offer two to three months of the new digital Newsweek free — and then move to a metered paywall. Maybe that old Newsweek name — even if the content is totally new in presentation and style — will bust down the paying doors early, but I doubt it.
* WHAT'S NEWSWEEK ONLINE AND WHAT'S THE WEEKLY EDITION? Impoco has been putting out an iPad-oriented e-edition weekly since mid-fall; you can see it on the site today. As the new pricing goes live, the meter will run against stories on both the ongoing site and the weekly "edition." Maybe it is worth persevering with the artifact of a weekly digital "edition," but it confuses me as a reader, just as the efforts of both Time and The Atlantic have done. Maybe the twin identities won't put off buyers, but simplicity of presentation seems to make the most business sense.
* THE EXPERIENCE AND INEXPERIENCE OF IBT'S CO-FOUNDERS. The headline on the Times story on the Newsweek print launch: "Tiny Digital Publisher to Put Newsweek Back in Print." IBT Media is a smallish — and fairly anonymous — company. Its flagship is the International Business Times, with more than a half-dozen separate IBT sites outside the U.S. and several niche brands, including Latin Times and Medical Daily.

The numbers IBT offers are these:

— $21 million in 2013 revenue, with profits of about $500,000 last year. Davis says the business has been profitable since 2010, four years after its founding. The main revenue is digital advertising. Davis says the site is able to pick up advertising CPM rates in the $8-12 range using ad networks.
— 48 million global and 24.3 million U.S. unique visitors.
— 150 editorial employees, about half in New York City and half in London, among 240 overall.

So who are these guys? They are self-funded, bootstrapped entrepreneurs; Johnathan Davis worked first as an engineer, not a journalist, and says he was pretty much Writer No. 1 for the International Business Times as well as its coder. Uzac got the idea for IBT and started working on it as he studied at the London School of Economics. The Times story made public the Moonie murmurs about the company, which they dismiss. Common to their work: scalable technology and analytics drive much of the business. They're savvy at the tricks of the digital trade to spur traffic. International Business Times takes in no reader revenue and is almost wholly reliant on advertising. The IBT product is surprisingly traditional and newspaper-like in its style and presentation; it doesn't emphasize voice. In fact, it's hard to categorize what exactly IBT _is_. It lacks the global-centric spunkiness of startup Quartz or the always-something-happening vitality of Business Insider. Its staff costs are relatively low, and you won't see it much quoted around the web. It does, though, seem to percolate along. With that background, neither cofounder has much experience with reader revenue, premium products, or top-level national journalism. Nor does either have experience with print, or magazines.
Davis says the new Newsweek is based in part on The Economist's higher-priced, reader-revenue-oriented strategy — though he hastens to add its attitude is different. Newsweek is a big, splashy, upscale play. Everything about it — newly redesigned print, atmospheric pricing, an early paywall — says premium, yet its young owners have little to no experience in the premium world. They may bring a refreshing skepticism about the way print magazines have long been done — the conventional wisdom of building a large, advertiser-satisfying subscriber base on cheap subs — but they may be getting in over their heads. There may be room for the new model — and kudos to the newest Newsweek owners for testing it. If it does work, experience tells us it's likely to take several years to prove out, and that may test the financial limits of this bootstrapping effort. As we approach mid-2014, we've got Newsweek getting one more (maybe its last) chance at survival and Time shape-shifting once again, as Time Inc. spins off uncertainly into the publishing future. For a genre long known for cycling back to the same subjects over and over, the cover story of the year appears to be: _Can Newsweeklies Survive the Digital Age?_
Data is at the heart of much of ProPublica's reporting. So why not try to find a new way to make money off that franchise? With the launch of the ProPublica Data Store, the nonprofit is trying to see if it can turn one of its strengths into a potential revenue generator. The Data Store offers a selection of datasets — some for sale, with prices varying depending on the user, and some free. The information in the store is a mix of raw data ProPublica has received through FOIA requests, data already available on the open web, and datasets that have been cleaned and prepared extensively by ProPublica staff for other investigations. While the raw and open datasets are free, the data cooked by ProPublica comes with a price tag attached. It's a setup similar to NICAR's Database Library, which offers journalists clean and formatted government data on things like plane accidents, federal contracts, and workplace safety records. For users wanting to get their hands on a state's worth of data from ProPublica's Dollars for Docs series, for instance, the cost varies: $200 for journalists, $2,000 for academics. Companies looking to use the data for commercial purposes have to negotiate a (presumably higher) price with ProPublica. Like any good business, ProPublica offers potential customers free samples of the data before they make a purchase.
ProPublica has always encouraged a level of openness with its work, often making investigations available to others by Creative Commons, or building news apps that allow readers to play with data. The data store is an extension of that, but also a potential solution to a question many newsrooms face: how to extract additional value out of an investigation. But don't expect the store to be a significant source of revenue, at least right away, according to Richard Tofel, ProPublica's president. "It will take a while for us to see if that's a serious revenue source or not," Tofel told me. Tofel said the company has fielded requests for commercial use of its data in the past. He said that could be a source of business for the company, if the interest materializes. The broader goal of the data store, he said, is providing an easier way for people to access information ProPublica has at its disposal. "The net effect of this initiative is to make a lot more data publicly available without having to go through us," he said. Scott Klein, senior editor for news applications at ProPublica, said one purpose of the data store was to create a standardized system for the flow of data through the newsroom. As a repository, the data store can be a resource to point journalists and academics to what records are available for free online. But Klein said the store also expands on the idea of encouraging others to build on ProPublica's work. In building the data store, Klein said they wanted to develop pricing that would account for the hours of work his team put into the datasets while also being fair to journalists and academics. "It's not uncommon for us to spend months cleaning and assembling datasets," Klein said. There are no revenue targets or other goals associated with the project. Both Klein and Tofel said they're eager to see the response to the data store and whether it can gain traction.
Klein said he believes if the experiment is successful, one idea they could consider is _à la carte_ datasets created specifically for others to purchase through the data store. "One of the ways of figuring out if there is a market for this, and how to serve this market, is to just try it," Klein said. _Photo of the Data Food Store in Molenbeek-Saint-Jean by Paul Keller used under a Creative Commons license._
The reason Henry Abbott started writing a blog was simple: It seemed like the only viable route he had to being a sports writer. That was almost a decade ago. Now the founder of the NBA blog TrueHoop will be taking over the reins of basketball coverage on ESPN.com. Abbott's ascent was a gradual one; after ESPN bought TrueHoop in 2007, it expanded the blog into a network of similarly inclined (analytical, passionate, bordering on obsessed) up-and-coming beat writers. Abbott's story was an early version of one increasingly familiar in journalism today: the outsider brought in to help a media company build online savvy and reach new audiences. It's also a formula that has worked well for ESPN, from Bill Simmons and Grantland to Nate Silver's new FiveThirtyEight. As the NBA deputy editor for ESPN.com, Abbott will have a different kind of task. Instead of building a new franchise from the ground up, he'll have to apply lessons from TrueHoop to take ESPN's NBA coverage in a new direction in order to meet fans' needs — and better compete with the future TrueHoops of the world. Abbott's excited about what comes next, but realistic about the challenges facing everyone in the media business. "All in all, I think the exciting and terrifying part of this is we really can't do things the way they've always been done," he told me. In our conversation, we talked about how the path to becoming a sports writer has changed, what kind of coverage NBA fans expect, my poor Minnesota Timberwolves, and the rise of sports analytics. Here's a lightly edited transcript of our conversation.
JUSTIN ELLIS: What do you think NBA fans are looking for in their coverage of the league today?
HENRY ABBOTT: I guess I'm in step with everybody in this industry in that I don't really know. We know it was different than it was in the past, right? But we're not navigating roads where we can say "Oh, we'll follow these signs — this is how you achieve success." What I'm thinking about is we're navigating by stars. Some of the stars [the audience] want it mobile, they want it fast, they want it accurate, they want it thoughtful, they want it with good storytelling. I think we're probably too distant from the players. Social media is making it clear players have all this infinite personality, and I think the audience wants it up close. They want to feel the character of the game too. We're trying to achieve all those things and how that's best done is a process of experimentation for the next couple decades.
ELLIS: Contrasting when you started writing and today, there was that whole era where people were questioning whether blogging was a good or a bad thing for journalism. How do you think blogging has changed sports coverage and reporting on the NBA?
ABBOTT: It definitely shook up the snow globe big time, right? It used to be a very small channel of entry to be able to write about basketball all day every day. Basically, you had to know your local newspaper editor and get entrusted with a beat job or covering high school sports. That was just one subset of the population who got to have an audience on basketball. Blogging just let everybody who wanted to try it try it. And that was a subject of concern for a lot of people. I think most of the concern boiled down to "With no barrier to entry, do these people who are doing this have any reason to do this? Should we believe them and are they accurate?" I guess the answer is now all over the place.
The blogging that matters to me, that has been at the forefront of the TrueHoop network, and that has launched a lot of careers over the last few years, is blogging where people are very scrupulous about being accurate. You can't shoot from the hip and say, "This guy is a jerk." You have to make evidence-based decisions. I think blogging has opened the door to everybody, but what's especially interesting is opening the door to this kind of new, more analytical, evidence-based thinking. Which is interesting and important. At the Sloan Conference, researchers from all over the world know all kinds of bits and pieces of things that totally matter to the game of basketball we've all known and loved our whole lives. What package does that go in? Is that a news story? That three-pointers are more valuable than we thought they were? It doesn't really have that _urgency_, but it's massively weighty if you care about basketball. I think blogging has been a conduit for that kind of knowledge, generally. Most people who write about that stuff started blogging. That's something I appreciate about blogging: the idea of just letting in people.
ELLIS: The NBA has people who do coverage on the NBA. It has its own network. And at the same time, you have players with their own Twitter accounts who can connect directly with fans. How has the speed of digital media from the league and from players affected the way journalists cover it?
ABBOTT: The rosy answer to that is that it's harder to lie. There's so many different people chiming in to call you on it if you do.
I wrote for magazines — including the NBA's official magazine — and I don't know that we ever heard from anybody about anything. You just wrote what you wrote, did your best. Nowadays everything is reacted to and cross-checked and triple-checked within seconds. You have to think really hard about exactly how you're gonna break that. I think we're digging into things with more accuracy, all in all, than before. Which is great. The downside is it's kinda a mess. It's just hard to figure out what's going on minute to minute. Where's the handy rundown of what matters today? Everything is all over the place and there are so many platforms and channels to keep track of.
ELLIS: When TrueHoop became a network, what was the benefit of building a network of writers? And looking at it today, how well do you think it worked?
ABBOTT: It solved a lot of problems. One problem was there was a lot of talent in the long tail, as they call it. There are all these really good writers out there who — where are they gonna go, what are they going to do? It seemed nice to affiliate with all these smart, hard-working people. I don't think anyone was really in a position to be like "Hey, we got jobs for everybody!" But we could offer them some kind of platform and endorsement. These are really earnest, hardworking, truth-telling bloggers. That's still the reason to keep it going. But I think what we've found as it progresses is that the best stuff, we don't want to have on an affiliated blog — we want to have it on ESPN.com. So all these characters — like Ethan Sherwood Strauss, who's 100 percent a product of the network — we elected to give him space on ESPN all the time. And I think that's been great for us.
ELLIS: You were an outsider bringing TrueHoop to ESPN, and now you look at someone like Nate Silver, who was brought in first at the Times and now into ESPN. What does it say that people who started as outsiders are being brought into these large media companies?
ABBOTT: I think it goes back to what I was saying in the beginning, that there's not a well-worn path here. I think the ESPN honchos have been pretty wise to recognize that things are going to be different in the future. There are no push-button solutions, but there is this kind of band of oddballs like me who've been scouting around the fringes for a while and have some sense of what feels like it might be handy. It's shifting in a digital way. It's shifting in a multimedia way. It's shifting in an evidence-based way. Daryl Morey's weird stat geek conference is suddenly the epicenter of networking and hobnobbing for NBA jobs. That's a shift. I just think there are more and more editors, and people in positions of power in publishing who are like, "You know, those guys who have been out there on the web doing this for a while? They know things we want to know."
ELLIS: Statistics have gotten smarter and more complex, through things like Player Efficiency Rating or this presentation at Sloan about Expected Possession Value. How do you think this explosion in stats has helped people understand the league?
ABBOTT: For me, it's not a hobby. For some people, stat geekery as a category is in and of itself fun. I think I'm probably like most readers where I kinda want to get into this. Neurological research these days is so fascinating, because we thought humans worked one way, but now that they do MRIs and learn about hormone secretions and all these things. There's so many parts of us we leave to vague descriptions from doctors who didn't really know — to now it's, "No, this is how your brain operates." Basketball is working like the brain, where now — you're describing this research — as the ball moves around the court, we're going to know the expected points value of that possession, moment to moment. Which suddenly means you gotta pass that ball to the open man right there. Now we're actually saying the expected points for that possession go from .69 to 1.1 — and that's how you win a game. Which is what we all want to know. So the fact of the matter is the best knowledge we have is that complex now — or a lot of it is. It's not categorical, it's not emphatic, but it's insightful as heck. I think that's where a lot of the interesting knowledge is now. When you put on your little detective hat digging for the truth, you end up talking to a lot of Ph.D. students, with their spreadsheets and their SportsVU. Whereas you used to talk to the trainer, I guess? That's where the insight is right now in a lot of cases, so that's where we have to go find it.
ELLIS: One of the things you've been doing a lot lately with TrueHoop is TrueHoop TV. How does that add to what TrueHoop is doing, and what does it provide to readers?
ABBOTT: From the early days of TrueHoop, I felt like I wrote a lot of long, boring, smart things where I was just sure I was right about everything. But the fact is not many people wanted to read that. So I've learned a lot from Royce Webb and Chris Ramsay and people here at ESPN about how do you package ideas so they're more inviting to a general audience. I don't want to be the expert of experts in some ivory tower somewhere. I want to actually get ideas across to basketball fans. So TrueHoop TV is just way more inviting. I can't do all the same essay-ish stuff, but you can watch it in five minutes on your phone and it can be insightful. All kinds of people at ESPN have all this knowledge, and I just think short-form original video conversations that are fun and inviting are probably one of the better tools we have to get that across to people. And people seem to like it. Who thought that two oddballs talking from their desks by Skype would ever get 300,000 people to watch it? But that actually happens sometimes, which is amazing. And encouraging.
ELLIS: So Twitter. It's a way to keep updated on stories and follow writers you like, but it also becomes very interesting in real time during a game. Watching something like the All-Star Game, which is always a so-so affair, becomes more interesting when you're following it on Twitter. What's your approach to social media?
ABBOTT: I mean, I love it for the eight times a day there's something I couldn't have known any other way. I hate that I have to wade through 5 gazillion things to get to those eight. There's no simple solution there. So you're a Timberwolves fan. If you're out with a friend at dinner and coming back from the bathroom you want to know how the Timberwolves game is going, you probably want to know the kind of stuff that's on Twitter. But you don't have time to scroll through everything that Twitter has to say about that without delaying your dinner. That's a riddle to solve, the density problem — how do we get higher density of the best stuff from social media.
ELLIS: You've been moving on this arc, from TrueHoop, to the network, and now this, where it seems like you get further and further away from writing. This might be a silly question, but do you want to have more regular opportunities to do the types of writing you were doing when you started out?
ABBOTT: That was a big decision and a big thing to think about in doing this job. I basically just told myself: Let's try it. I'm gonna put my head down for a few years and just do this job, and I've got so many things I'm excited to do in this job. If at the end of those two years, we figure out that I just miss writing so much, there's always that. Also, it's not like I can't write. It's just a question of time management. If there's something I'm just dying to write, I'll just write it. But I'll definitely have less time for that. And it's not really fair to all the great writers here to take time from helping their stuff get the best spotlight it can, where I'm just like "Oh, no, I'm working on _my_ story right now." I don't want to compete with Marc Stein for anything. I think I'll do a lot less of that. I don't know how much I'll miss it. I occasionally do find myself on the phone with some other writer here saying, "Hey, you should really write this, but you should write it this way, and you should say this, and say this." And then I have to stop because I realize what's actually happening is I'm writing over the phone. We'll see how much that happens and how much I annoy people.

_Photo of Justin's unfortunate Timberwolves playing by Doug Wallick used under a Creative Commons license._
One major topic of discussion across all four days of last weekend's NICAR (or five days, depending on how hardcore you are) was the advantages of static news apps versus dynamic news apps. The conversation ultimately resulted in a staged debate on Saturday evening.
For NPR and static apps: @jeremybowers and @onyxfish For ProPublica and dynamic apps: @kleinmatic and @thejefflarson -- Tyler Fisher (@tylrfishr) March 1, 2014

But I had a problem:
@laurenrabaino I hope it opens with them explaining what that means. -- Caroline O'Donovan (@ceodonovan) February 27, 2014

Enter Noah Veltman, with a blog post for dummies explaining exactly what the difference really is.
A static website is a vending machine. A dynamic website is a restaurant.

How so?
When people talk about a static server, they are talking about a server set up like a vending machine. When you ask for a URL, like http://www.nytimes.com/cheetos.html, there is an actual file already on that computer called cheetos.html, and all it has to do is send you a copy of that file. Talking about "flat files" refers to the same thing: having actual files that match the URLs people might request. You punch E6, you get the Cheetos that are already in the machine, packaged, ready to eat.

Versus:
When people talk about a dynamic server, they are talking about a server set up like a restaurant. The food is made-to-order. The kitchen has ingredients, and the cooks assemble those ingredients into the finished product only AFTER someone orders that dish. In this version, if you ask for a URL, like http://www.nytimes.com/crabcakes.html, crabcakes.html doesn't exist yet. There is no such file. It's just a menu item. The server waits for your request, and when the request comes in, it uses ingredients (e.g. templates, a database, the Twitter API) and a "recipe" to create that page on the spot, just for you. When people talk about the "back end" or "server side," they are talking about the kitchen: the stuff that a server does when it gets a new request.

The post goes into further detail about how those core differences affect content delivery and newsroom developer workflow.
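Veltman's analogy maps cleanly onto code. Here's a minimal sketch of the two setups in Python; the file contents, template, and "database" rows are invented for illustration, not drawn from any real news app:

```python
from string import Template

# Static ("vending machine"): every URL maps to a file that already
# exists on disk; the server just hands over a copy.
FLAT_FILES = {"/cheetos.html": "<h1>Cheetos</h1>"}

def serve_static(path):
    return FLAT_FILES.get(path, "404 Not Found")

# Dynamic ("restaurant"): the page doesn't exist until you order it.
# The server combines a template (the recipe) with data (the
# ingredients) at request time.
TEMPLATE = Template("<h1>$dish</h1><p>Price: $$${price}</p>")
DATABASE = {"crabcakes": {"dish": "Crabcakes", "price": 12}}

def serve_dynamic(path):
    key = path.strip("/").removesuffix(".html")
    row = DATABASE.get(key)
    if row is None:
        return "404 Not Found"
    return TEMPLATE.substitute(row)
```

The tradeoff the NICAR debaters argued over falls out of those two functions: `serve_static` can be cached and hosted anywhere but can't personalize anything, while `serve_dynamic` does work on every request.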
EDITOR'S NOTE: There’s a lot of interesting academic research going on in digital media — but who has time to sift through all those journals and papers? Our friends at Journalist's Resource, that's who. JR is a project of the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School, and they spend their time examining the new academic literature in media, social science, and other fields, summarizing the high points and giving you a point of entry. Here, John Wihbey sums up the top papers in digital media and journalism this month.

Over the past month, the scholarly world has been cranking out new insights — some profound, some obscure, and some useful for newsrooms and media producers of all kinds. Meanwhile, Nicholas Kristof has kicked off yet another round of debate about whether academics are engaged enough (see Ezra Klein for the latest salvo, on gated academic journals and their consequences). Amid all that, there are indeed some good nuggets coming from the halls of academe. Recent themes: _Know thy network. Beware the rise of journo bots. And milk those Twitter users for cash._ More on those below, where you'll find a sampling of recent papers and their findings. "Mapping Twitter Topic Networks: From Polarized Crowds to Community Clusters": From the Pew Research Internet Project. By Marc A. Smith, Lee Rainie, Ben Shneiderman, and Itai Himelboim. This important new study, done in collaboration with academic researchers Shneiderman (University of Maryland) and Himelboim (University of Georgia), goes a long way toward making social network analysis and theory intelligible to the general public. In a clean, straightforward way, it lays out the six basic "archetypes" of Twitter conversation, giving precise language to phenomena many of us observe at only an intuitive level (and yet which researchers have observed for some time).
Having analyzed millions of tweets, the researchers conclude that political discussions often show "polarized crowd" characteristics, whereby a liberal and conservative cluster are talking past one another on the same subject, largely relying on different information sources. Of course, you still see old "hub-and-spoke" dynamics, or "broadcast networks," where mainstream media are still doing the agenda-setting. But there are novel networks, too: "Support" networks that form around customer complaints, which look like "hub-and-spoke" but also involve more two-way conversation; "tight crowds" involving niche interests, hobbies and professional groups; "brand clusters" around topics of mass interest (celebrities, for example) that primarily feature "isolates," or people talking about the same subject but not to one another; and "community clusters" that "look like bazaars with multiple centers of activity" and which "can illustrate diverse angles on a subject based on its relevance to different audiences, revealing a diversity of opinion and perspective on a social media topic." Related: A new study in the Journal of Communication, "Social Media, Network Heterogeneity, and Opinion Polarization," by Jae Kook Lee, Jihyang Choi, and Cheonsoo Kim of Indiana University and Yonghwan Kim of the University of Alabama, demonstrates the importance of news-related activities on social networks. Getting news, posting news and talking about politics on Twitter and Facebook seems to be associated with having a more diverse social network. Overall, the "role played by social media in the realm of public opinion is not simply optimistic or pessimistic," the researchers conclude. "The battle for 'Trayvon Martin': Mapping a media controversy online and off-line": From the MIT Center for Civic Media, published in First Monday. By Erhardt Graeff, Matt Stempeck, and Ethan Zuckerman.
The Lab's Caroline O'Donovan has already published a wonderful explainer on this study — worth checking out if you missed it. The study represents an ambitious effort to map public discourse around a national news topic — its ebb and flow, its catalysts, magnifiers and gatekeepers alike. How exactly do stories move across the wide array of information channels we use? The researchers conclude: "Our analysis finds that gatekeeping power is still deeply rooted in broadcast media…Without the initial coverage on newswires and television, it is unclear that online communities would have known about the Trayvon Martin case and been able to mobilize around it." Effective public relations by parties involved saved the story from vanishing initially, and social media jumped on the bandwagon only later. Graeff, Stempeck, and Zuckerman contribute important insights into the networked ecosystem of communication and news. The paper is a direct follow-on to an earlier paper by Internet theorist Yochai Benkler and Co., which suggested new network dynamics at work around the Stop Online Piracy Act (SOPA/PIPA) and related online activism. Both papers leverage the underappreciated Media Cloud project, which is finally getting its due. Graeff, Stempeck, and Zuckerman basically show a kind of counter-example to the Benkler findings. This scholarly back-and-forth is well worth paying close attention to, as MIT and Harvard's Berkman Center have more papers in the pipeline along these lines. If we are to answer the ultimate digital media question — "How much has the Internet truly changed communication?" — this research will be a vital resource in providing the data. "Enter the Robot Journalist: Users' Perceptions of Automated Content": From Karlstad University (Sweden), published in Journalism Practice. By Christer Clerwall.
The study sets out to see how, among a small sample of undergraduates, people might judge differences between news content written by human journalists and by computers. The sample articles focused on National Football League topics. The subjects were basically unable to tell the difference between the two articles, and indeed on average found the computer-generated article more credible. Clerwall concludes: "Perhaps the most interesting result in the study is that there are no (with one exception) significant differences in how the two texts are perceived by the respondents. The lack of difference may be seen as an indicator that the software is doing a good job, or it may indicate that the journalist is doing a poor job — or perhaps both are doing a good (or poor) job?" He asks a provocative and, for many in the media industry, scary question: "If journalistic content produced by a piece of software is not (or is barely) discernible from content produced by a journalist, and/or if it is just a bit more boring and less pleasant to read, then why should news organizations allocate resources to human writers?" "Inferring the Origin Locations of Tweets with Quantitative Confidence": From Los Alamos National Laboratory and Illinois Institute of Technology, presented at ACM's February 2014 Computer Supported Cooperative Work conference. By Reid Priedhorsky, Aron Culotta, and Sara Y. Del Valle. This paper demonstrates that, although only a tiny fraction of people enable geolocation on their tweets, it is algorithmically possible to figure out where you are tweeting from, working from just proximal cues (particularly the mention of toponyms, or placenames). The researchers analyze 13 million tweets and figure out the basic thresholds they need to infer location.
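To see why toponym mentions leak location, consider a toy version of the idea — a naive gazetteer lookup, not the paper's actual statistical method, and with made-up gazetteer entries:

```python
from collections import Counter

# Hypothetical gazetteer: toponym -> (place name, latitude, longitude)
GAZETTEER = {
    "baltimore": ("Baltimore, MD", 39.29, -76.61),
    "omaha": ("Omaha, NE", 41.26, -95.93),
    "mission district": ("San Francisco, CA", 37.76, -122.42),
}

def infer_location(tweet_text, min_mentions=1):
    """Guess a tweet's origin from placenames in its text.
    Returns (place, lat, lon) for the best-supported match, or None."""
    text = tweet_text.lower()
    hits = Counter()
    for toponym, place in GAZETTEER.items():
        if toponym in text:
            hits[place] += 1
    if not hits:
        return None
    place, count = hits.most_common(1)[0]
    return place if count >= min_mentions else None
```

Even this crude matcher shows the privacy point the authors make: mentioning a specific neighborhood is a far stronger locating cue than mentioning a state or country.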
Priedhorsky, Culotta, and Del Valle note that the findings have implications for privacy: "In particular, they suggest that social Internet users wishing to maximize their location privacy should (a) mention toponyms only at state- or country-scale, or perhaps not at all, (b) not use languages with a small geographic footprint, and, for maximal privacy, (c) mention decoy locations. However, if widely adopted, these measures will reduce the utility of Twitter and other social systems for public-good uses such as disease surveillance and response." Related: Also see other interesting ACM conference papers such as "The Language that Gets People to Give: Phrases that Predict Success on Kickstarter" and "Designing for the Deluge: Understanding & Supporting the Distributed, Collaborative Work of Crisis Volunteers." (A special thanks and hat tip to Meredith Ringel Morris of Microsoft Research, who co-chaired the papers committee for the conference.) "An Empirical Study of Factors that Influence the Willingness to Pay for Online News": From Universidad Carlos III de Madrid, Spain, published in Journalism Practice. By Manuel Goyanes. Goyanes analyzes a random sample of 570 survey interviews done by the Pew Research Center to see how demographics and media use relate to paying for news and other online goods and services. Younger people, and those with incomes above $75,000, were more willing to pay for online news. Twitter users showed an increased willingness to pay. Goyanes states that "news organizations [should] consider Twitter not only a mechanism to distribute breaking news quickly and concisely, but also a marketing and interactive platform with which they can convince new customers to pay for their content through innovative marketing and advertising campaign."
"Facebook 'friends': Effects of social networking site intensity, social capital affinity, and flow on reported knowledge-gain": From San Diego State University, published in The Journal of Social Media in Society. By Valerie Barker, David M. Dozier, Amy Schmitz Weiss, and Diane L. Borden. The study adds to the growing and voluminous literature on the human motivations behind activity on social media. The researchers set out to assess what makes people learn things on Facebook, and under what conditions they are more likely to acquire knowledge. Which quality is most important? As you might expect, it's the desire to connect. The study analyzes a subset of data (236 persons) from telephone surveys with Internet users conducted in 2012. Barker, Dozier, Schmitz Weiss, and Borden conclude that it is not the intensity of participation in social networking sites that is the crucial factor in driving them to pick up knowledge. WHAT MATTERS MOST, IT TURNS OUT, IS A "SENSE OF COMMUNITY AND LIKENESS FELT FOR WEAK TIES ONLINE" — social capital affinity — in terms of acquiring knowledge both through focused tasks and incidentally. _Photo by Anna Creech used under a Creative Commons license._
How a crime becomes political: Trayvon Martin and the way different media co-create the news
I am newly returned from Baltimore and the NICAR conference, where one of the most laugh-out-loud sessions of the weekend involved Brian Abelson, Joe Kokenge and Abraham Epton talking about how and why to build Twitter bots. Stephen Suen, of MIT's Comparative Media Studies writing program, has a helpful blog post about the conversation. Kokenge laid out the basics of making a bot. Epton talked about his ILCampaignCash, a Chicago Tribune product that tracks and tweets campaign donations. Abelson offered a long list of bots both humorous (like @FloridaMan or @Haikugrams) and practical (like @TreasuryIO or @YourRepsOnGuns) that suggested the breadth of possibility when it comes to bots. There are also, of course, challenges:
Brian says the logic behind the Twitter bot is strict rather than greedy. He also points to issues faced with Times Haiku. “The challenge is, how are we not going to make a haiku of the Syrian civil war, how are we not going to make a haiku of something that’s serious… that’s why it’s easier to do some of these funny artistic ones rather than something you can put the name of a newsroom on.” Once again, rate limiting is brought up — Abraham says you can write the logic of your bot to avoid having your account get deleted. "Use common sense," he says. The more you avoid behaviors that make your bot seem like a spam bot, the safer your account will be. Joe and Brian agree — the rate limit is high enough that you can get away with a tweet every 5 minutes without hitting it.
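The throttling the panelists describe is easy to sketch: space posts out so the bot never bursts like a spammer. This is a generic illustration, not the code behind any of the bots mentioned; `post_tweet` stands in for whatever API client you'd actually use:

```python
import time
from collections import deque

TWEET_INTERVAL = 5 * 60  # seconds; one tweet every five minutes, per
                         # the panelists' rule of thumb for staying
                         # clear of rate limits

def run_bot(new_items, post_tweet, sleep=time.sleep, clock=time.monotonic):
    """Drain a queue of items, posting at most one tweet per interval.
    `sleep` and `clock` are injectable so the loop can be tested
    without real waiting."""
    queue = deque(new_items)
    last_post = None
    posted = []
    while queue:
        now = clock()
        if last_post is not None and now - last_post < TWEET_INTERVAL:
            sleep(TWEET_INTERVAL - (now - last_post))
        item = queue.popleft()
        post_tweet(item)
        posted.append(item)
        last_post = clock()
    return posted
```

The "strict rather than greedy" logic Abelson mentions lives in the queue: the bot posts each item once, in order, rather than grabbing everything matching at the moment it wakes up.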
Some people who scrape and publish information from the Internet go to jail. Others produce great journalism. It's easy to understand why you might want to know which person you are — and whether or not you're protected from prosecution — but that can be a difficult task. That's why there was a discussion on the topic at the Computer Assisted Reporting conference in Baltimore last week. ProPublica's Scott Klein, Scripps Howard's Isaac Wolf, and defense attorney Tor Ekeland participated in a conversation moderated by The Wall Street Journal's Jeremy Singer-Vine.
Wolf is a Scripps News reporter who garnered some attention last spring when he reported on a major security breach at a company called TerraCom. In the course of a typical PDF search, Wolf discovered that personal information including Social Security numbers, addresses, and other account information had been left vulnerable. Publishing his findings led Wolf and his colleagues to be branded as "hackers." Sarah Laskow wrote in CJR that the Scripps case may well be the first time a journalist was threatened under the Computer Fraud and Abuse Act. The Computer Fraud and Abuse Act is a law that prohibits unauthorized access to information on a protected computer. It's the statute under which Andrew Auernheimer, better known as weev, was prosecuted and sentenced to 41 months in prison for taking evidence of a security flaw in AT&T's systems that left user email addresses vulnerable and giving it to Gawker. (It's also the law that led to the prosecution of Aaron Swartz.) One of Auernheimer's attorneys was Ekeland, who provided a legal perspective for the journalists at NICAR on issues around the CFAA. "It's a very dangerous statute, because it's so poorly written," Ekeland said, "and they're about to make it worse."
.@TorEkelandPC on prosecution of scrapers: Not an objective standard, subjective stand that turns on whim of the website owner. #NICAR14 -- Tyler Dukes (@mtdukes) February 27, 2014

Klein and Singer-Vine are both journalists who have worked on or edited stories that involved, in different ways, practices that could fall under the hacking umbrella. For example, ProPublica published MessageMachine, a project that used reverse engineering to figure out why certain people received specific personalized emails from the Obama campaign. Singer-Vine worked on a story about online pricing inequality on the Staples website. The focus of the panel discussion was around how journalists interested in doing this kind of work can protect themselves and ensure that they're on the right side of the law. Because the law is nonspecific in its language — and widely decried as outmoded — interpretations of what's legal and what's not vary wildly. "The press is protected by virtue of the fact of who they are," Ekeland said. "I don't see any difference between what my client did and what Isaac did, except my client is an asshole." At ProPublica, there are deliberate rules about how a journalist seeking information online should represent themselves. Klein said that reporters there are banned from creating "straw men," or programs that falsely suggest the existence of an actual person. That's why, for the MessageMachine project, users were crowdsourced, and their information — information pertaining to real people — was used to analyze the campaign email algorithm. "I don't feel like it would have been morally wrong to create straw people, but I can see why adopting these moral ethics…makes sense," Klein said. (Klein said they ultimately realized that creating fake users wouldn't have worked anyway, and that the crowdsourced user base has more value and longevity.) At The Wall Street Journal, Singer-Vine said he had a similar debate over self-representation.
Ultimately, his team tracked Staples price differentials by modifying the cookies the system relied on to track users, a technique that they felt was significantly different from creating straw men. Whether a judge would consider that action acceptable under the CFAA is less clear. "Go find a journalism ethics book that says when you can find and manipulate a variable in a cookie," said Klein. "Good luck! We're working without a net." It's worth noting an argument introduced by Ekeland on this topic. Framing the issue as a journalist lying to a computer perpetuates the notion that they're dealing with something other than a computer. In point of fact, machines don't have a sense of truth — there are only inputs and outputs. "The computer isn't being deceived, it's doing what it was programmed to do," he said. "We want there to be physical, real world analogies, but the computer people don't do that." Not all agreed, however:
.@TorEkelandPC says you can't deceive or lie to a computer, but what about SEO black hats … definitely lying to the googlebot #nicar14 -- Nick Diakopoulos (@ndiakopoulos) February 27, 2014

Ultimately, the conventional wisdom seems to be that reporters hoping to stay out of court should be very upfront about their intentions, conservative in their judgments, and confident in the value of what they're doing.
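For a sense of what a cookie-variation test like the Staples one involves, here's a generic sketch; the "zip" cookie name and the fetch mechanics are invented for illustration, and the Journal's actual methodology was more involved:

```python
def compare_prices(product_url, zip_codes, fetch):
    """Request the same product page with different location cookies and
    collect the advertised price for each. `fetch` is any callable
    taking (url, cookie_header) and returning the price it finds; in
    real reporting it would wrap an HTTP client and an HTML parser."""
    prices = {}
    for zip_code in zip_codes:
        cookie = f"zip={zip_code}"  # hypothetical cookie format
        prices[zip_code] = fetch(product_url, cookie)
    return prices

def price_spread(prices):
    # The story's finding, in miniature: does location change the price?
    return max(prices.values()) - min(prices.values())
```

Injecting `fetch` keeps the comparison logic separate from the scraping itself, which also makes it easier to document each request — the kind of process log the panelists recommend keeping.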
Klein, for example, explained how easy it can be to violate the law accidentally. ProPublica was working with a series of FCC filings at one point while developing a story about who pays for campaign TV ads. The stations are required to make this information publicly available, which is how ProPublica acquired the documents, only to discover later that scanned personal checks were included in the PDFs. Luckily, their reporters realized in time, and were able to do a search for the phrase "pay to the order of," and delete the information from DocumentCloud. Clearly, there's a need to proceed with caution as journalists continue to gain access to sensitive documents that are publishable on the web in full.
Crowdsourcing campaign spending: What ProPublica learned from Free the Files
While the ethics of various methodologies were up for debate, and while interpretation of the law remains opaque, the panelists largely agreed on how journalists can best protect themselves right now. "You want to be able to demonstrate that you're using this information for a journalistic purpose," said Wolf. "Assume that you're going to be challenged. What is your story? You're going to be prodded by the entity or company. Reporters elsewhere are going to be asking you questions." In addition, he recommends keeping track of process, so that a step-by-step narrative of what was done and why can be presented if necessary. Journalists are protected, but ultimately, they're only safe if it can reasonably be proven that leadership at their organization concurred that the measures taken were in pursuit of the public good — that the information is, in Scott Klein's words, "not gossip — it's not prurient." Just last month, the Department of Justice communicated its interest in working to narrow the scope of the CFAA. There are multiple cases in appeals court; as rulings come down, and as lawmakers push for reform, the hope is that the law will become less vague. As Wolf pointed out, if journalists want to be a part of shaping a statute that has the potential to curtail their tools for gleaning information, now is the time to get involved.

_Image of a gavel by Joe Gratz used under a Creative Commons license._
Warren Buffett's Berkshire Hathaway has, over the past few years, bought up dozens of newspapers, with 69 papers and other titles currently part of the BH Media Group, including the Richmond Times-Dispatch, Greensboro News & Record, Omaha World-Herald, and Tulsa World. In the 2012 edition of his legendary annual shareholder letters — seriously, their level of clarity is something most journalists can only aspire to — Buffett went on at some length about the purchases:
Newspapers continue to reign supreme, however, in the delivery of local news. If you want to know what’s going on in your town — whether the news is about the mayor or taxes or high school football — there is no substitute for a local newspaper that is doing its job. A reader’s eyes may glaze over after they take in a couple of paragraphs about Canadian tariffs or political developments in Pakistan; a story about the reader himself or his neighbors will be read to the end. Wherever there is a pervasive sense of community, a paper that serves the special informational needs of that community will remain indispensable to a significant portion of its residents.

On Friday afternoon, the latest edition of Buffett's shareholder letter was released, and I went to it quickly to see what new thoughts there might be inside from the Oracle of Omaha about the newspaper business. The answer: _nada_. The only reference to newspapers was a pitch for BH's "third International Newspaper Tossing Challenge." Buffett's got a good newspaper-tossing arm:
Now, there's certainly no shame in being left out of a Berkshire Hathaway shareholder letter. The reach of Buffett's empire is so broad and diverse that expecting a newspaper update every year is a bit like expecting the Roman senate to demand the latest from a small town in Mauretania Caesariensis at every meeting. But the newspaper business can use all the outside business smarts it can get these days. Berkshire Hathaway is of course famous for letting its component businesses run themselves, but I think this year's absence of attention might be a tiny, _tiny_ piece of evidence that Martin Langeveld was right when he characterized Buffett's interest in newspapers this way in our year-end Predictions package:
Understanding the billionaire media gambles
I think Warren Buffett is really pursuing a mop-up strategy. He says otherwise, of course: “Wherever there is a pervasive sense of community, a paper that serves the special informational needs of that community will remain indispensable to a significant portion of its residents.” What else is he going to say? He may actually believe this, and believe that printed newspapers will remain viable for a long time, and may prefer to read news on paper like most people in his generation. But Buffett’s backup strategy is this: He is buying newspaper assets cheap and not investing much into them, in the expectation that even if they lose all value over the next 6 or 8 years, he will have made a decent return on his investment… Warren Buffett will continue buying newspapers wherever he can do so very cheaply. No grand strategy, no new business models for news will emerge from Omaha. Ultimately, these papers will be closed or sold. It’s a mop-up.
BuzzFeed editor Ben Smith came up to Cambridge this week and gave a talk at the Nieman Foundation about the site's evolution from a meme factory into a meme factory that also reports on events in the Crimean peninsula. Our friends at Nieman Reports have video of the full event (featuring a few questions from us Nieman Labbers) and a BuzzFeed-inspired selection of quotes. This, for instance, is true:
My actual day-to-day view is that every single piece of content is competing with every single other piece of content all the time.

As is this:
One of the advantages of starting from scratch is that you can rethink beat structures. Gay rights is this huge story of the last 10 years, but it’s covered as a B-list beat at a lot of publications just because it always has been. For us, it’s very much a frontline beat, and we’re able to hire the best reporters who really own that beat.

Much more at Nieman Reports. While he was in town, Ben also gave a talk at the Shorenstein Center over at the Kennedy School; you can hear audio of that here, read Perry Hewitt's thoughts here, and see a Storify here. And, if that's still not enough GIFs for you, check out our archive of BuzzFeed-related pieces here on the Lab.
As dean of the UC Berkeley Graduate School of Journalism, Edward Wasserman's goal is simple: "I'm trying to turn out journalists who get Pulitzer Prizes." But with limited resources, achieving that goal requires prioritizing initiatives. And that's what happened at Berkeley this week, as Wasserman said the school was ending its involvement with Mission Local, the hyperlocal news site it has run in San Francisco's Mission District since 2008. Mission Local will be spun off as an independent, for-profit site. "Mission Local was moving toward a stand-alone operation," Wasserman said. "That's testimony to their success."
(We didn't get a chance to talk with Wasserman before our story yesterday on the move, but we connected Thursday evening and wanted to share his perspective.) Many saw Mission Local as a prime example of the teaching hospital model of journalism education; just as medical students can provide services to their community, so could journalism students. Wasserman praised Mission Local for the work it was doing in the Mission, but said many aspects of running a successful local news organization — covering community events and businesses or marketing the publication, for example — do not help in training journalists who are able to tackle hard, complex stories. "The question then becomes: Do you face a choice at a certain point of providing that news site what it needs to fulfill that function in the community, or giving your students the training they need to be the kind of high impact, sophisticated, well-trained reporters going after difficult stories?" Wasserman asked. Wasserman also emphasized that Mission Local was not a core component of Berkeley's curriculum. Students were required to work for Mission Local — or one of Berkeley's two other hyperlocal sites — as one of their first-semester courses. They could choose to continue working for the site for another semester, but it was optional, he said. And continuing to run the sites year-round distracts from the school's chief mission of educating students.
Cut loose by UC Berkeley, hyperlocal site Mission Local looks to spin off as a for-profit
"Part of that teaching hospital model is a kind of implied obligation for a year-round service," he said. "Who pays for that? I don't have the money for that. I'm trying to throw as much money as I can to financial aid for my students." And while Wasserman acknowledged that it is important for students to understand the business side of journalism, with only two years to provide a comprehensive education, he argues there isn't enough time to fully educate students in areas like marketing, circulation, and other areas critical to running a news organization — especially with a glut of digital tools that students need to master to be successful reporters. "You don't expect lawyers to get of law school and understand how to run a law firm," Wasserman said. "You don't expect doctors to run a hospital." Berkeley runs two other hyperlocal news sites, Oakland North and Richmond Confidential, which are closer to the school's campus, and Wasserman said the school is still figuring out the best way to utilize those resources. He said they will likely continue in some form, but could be consolidated into a school-wide site focusing on the entire Bay Area or focus on certain coverage areas like criminal justice, civil justice or healthcare. "The teaching hospital model is a noble thing," Wasserman said. "There may be elements of that we want to retain, but it's not some template that you simply apply and follow."
J-schools: Success in news today is about a lot more than reporting and writing
THE COMCAST/NETFLIX DEAL EXPLAINED: Two weeks after Comcast announced it would buy Time Warner Cable and a month after a federal court overruled the U.S.' net neutrality regulations, Comcast signed an agreement with Netflix in which Netflix will pay Comcast for a direct traffic-sharing connection to its network in order to improve the quality of its streaming video. The deal, called "paid peering" or "transit," is likely to be the first of several for Netflix, as Verizon and AT&T both quickly said they're negotiating similar arrangements with Netflix as well.
The Lab's Ken Doctor looked at the business end of the deal: Netflix is clearing hurdles to its video streaming quality as it prepares to introduce additional tiered pricing, and Comcast is removing Netflix as a possible objector to regulatory approval of its purchase of Time Warner Cable. Variety's Todd Spangler said Netflix has now fixed some of its key costs and is solving its biggest streaming quality problems, though Peter Cohan of Forbes said the deal doesn't tell us much about how Comcast will treat other video-streaming services, especially after its Time Warner merger. If you want the really deep dive on the agreement, read Dan Rayburn's post with the details. It's important to note that this deal would not have been covered by net neutrality regulations. As CNET's Marguerite Reardon and Consumerist's Chris Moran explained well, peering isn't about stopping intentional slowdowns of traffic quality or about giving preferential treatment to some services, both things that net neutrality would be built to stop. Instead, it's about Netflix being allowed to connect its own content delivery network — most companies pay for third-party networks to deliver their content around the web, but Netflix has built its own to account for its incredibly high volumes of data — directly to Internet service providers like Comcast. That doesn't mean it doesn't raise concerns about the future of the Internet, however. The Washington Post's Timothy B. Lee argued that deals like this transform the Internet from its classic structure in which all sites' content flows together to ISPs through a few big "pipes" — the structure on which the net neutrality ideal was built — to one in which each major content provider uses its own pipe, which can be easier to individually manipulate.
Gizmodo's Eric Limer said this deal relies on both sides' size and encourages further consolidation: Comcast had enough leverage to sit back and wait for Netflix to pay up to fix its streaming quality problem, and Netflix was able to solve it by being big enough to build "its own private highway." Now, he wrote, "established champs who can pay for a separate tube have the advantage of not having to fight with a bunch of other traffic. It's about to get harder than ever for something like Netflix to come along again." Free Press pointed to the deal as evidence of the need for stronger anti-consolidation regulatory forces in Washington, and The New York Times' Vikas Bajaj made a similar point in calling for the FCC to revisit its net neutrality stance. On the other hand, Wired's Robert McMillan argued that smaller players may not need or want a direct connection to ISPs like Netflix has, since they already pay third-party networks for that same connection and don't stream nearly enough data to make it a serious problem like Netflix has. StreamingMedia's Dan Rayburn said there's nothing nefarious or threatening to net neutrality about this deal; Netflix is just shifting its costs for connecting to Comcast's network from a third-party network directly to Comcast. "This is how the Internet works, and it’s not about providing better access for one content owner over another," he wrote. In a follow-up post, Rayburn said this is a win for consumers more than anything, as we get better streaming quality and it costs Netflix less over time to give it to us. Elsewhere in telecommunication, The Wall Street Journal reported on telecom giants' fight against net neutrality laws in Europe and the U.S., and at In These Times, Jay Cassano and Michael Brooks said net neutrality's erosion will disproportionately impact the mobile Internet and therefore lower-income people who depend on it.

AN FCC NEWS STUDY BECOMES A POLITICAL FOOTBALL: The controversy surrounding a proposed U.S.
Federal Communications Commission study continued to boil over late last week into this week, prompting the FCC to suspend and revamp the study. The flareup started earlier this month with an op-ed in The Wall Street Journal by FCC commissioner Ajit Pai that raised an alarm about the FCC's wide-ranging proposed study of the "critical information needs" of communities, which included plans to interview journalists about how they select stories and what their news organization's philosophy is. Pai said those questions represented an inappropriate government intrusion into newsrooms and raised the specter of government policing of journalism. The concerns were picked up widely across conservative media outlets, and the outcry led the FCC to axe the interviews of journalists, though it still plans to go ahead with the majority of the study, involving surveys and interviews of citizens about the news they get. Even after that concession, a Republican congressman said he plans to hold hearings on the study and introduce legislation to block it entirely. Others weighed in with their views as well: USA Today's Rem Rieder said the study isn't an Obama-driven plot to control the press, but a poorly-thought-out attempt to determine whether citizens are getting key information. "The last thing we need is journalism cops flooding into newsrooms to check up on how the sausage is being made," he wrote. Likewise, The Atlantic's Conor Friedersdorf said the study hardly portends the return of the Fairness Doctrine, but looks like a waste of public money regardless. FCC commissioner Mignon Clyburn defended the study, saying it's simply meant to determine whether there are any barriers to market entry keeping communities from receiving important information.
Likewise, Wisconsin professor Lewis Friedland, who led the literature review that preceded the study, told the Columbia Journalism Review there was no government monitoring, intimidation, or coercion ever intended with the now-dropped journalist questions — "it was simply to get their point of view of how they understood the information needs of their local communities." Techdirt's Karl Bode delved into the study proposal and concluded that "it's a fairly routine and entirely voluntary field survey designed to gather data. Nothing more." Bode chided the FCC for kowtowing to conservative pressure to gut the study. WHY PIERS MORGAN NEVER CLICKED FOR CNN: The New York Times' David Carr reported this week that Piers Morgan, the former editor of the defunct British tabloid News of the World, will have his prime-time CNN show canceled this spring. Morgan was given the key slot once occupied by Larry King, and the 81-year-old King told The Daily Beast's Lloyd Grove he'd be willing to come back if CNN would have him. Carr surmised that Morgan's show never took off because his irrepressible Britishness never fit with an American audience and because his tirades against guns "have clanked hard against the CNN brand, which, for good or ill, is built on the middle way." Time's James Poniewozik said Morgan's show was rife with problems, including his Britishness, his abrasive personality, and his longform style. Slate's David Weigel argued that Morgan was ultimately a poor interviewer — either too deferential or too bullying — and The Washington Post's Erik Wemple said that without a coherent overarching perspective, Morgan's show was left to rely on the devalued commodity of the long-form interview: "In today’s America, there are so many outlets producing interviews, so many outlets for your message — that an hour-long interview program is almost programmed for obsolescence."
OPTIONS TO BREAK UP THE NSA: There were a couple of new revelations on government surveillance this week: Glenn Greenwald at The Intercept published documents from the Edward Snowden leak that detail how a unit of the British spy agency GCHQ plants false information online to ruin the reputations of its targets and infiltrates online discourse to try to drive targeted groups apart. A German paper also reported that the U.S. National Security Agency has stepped up its spying on other German officials after it was told by President Obama not to spy on German Chancellor Angela Merkel. The Wall Street Journal (paywalled) reported that the Obama administration is considering overhauling NSA surveillance in a variety of ways. Gizmodo has a good, quick summary of the options Obama is considering: letting the phone companies oversee the phone metadata collection, letting a different federal agency hold the data, letting a third party hold the data, or abolishing the data collection program completely. At CNN, cybersecurity expert Bruce Schneier offered his own plan for breaking up the NSA, which includes moving all surveillance of Americans to the FBI to bring it under U.S. law. READING ROUNDUP: A few other conversations and developments that bubbled up this week: — Reddit is testing a live blogging-style update format for breaking news stories, something Gigaom's Mathew Ingram said could be a real boon for the site and for social journalism. PandoDaily's Nathaniel Mott said technical changes won't necessarily change the site's spotty track record for accuracy on news events, but Circa's Anthony De Rosa said Reddit shouldn't be dismissed as a potentially valuable link in the online news chain. — Upworthy ran a correction this week for a faulty video it had run earlier, and it was distinct in that it consisted mostly of complaints from readers interspersed with apologetic GIFs from Upworthy staffers.
Poynter's Craig Silverman looked at the debate surrounding the correction and talked to Upworthy about why it made the correction that way. Upworthy's Matt Savener also defended the site's track record and editorial process.
— Politico's Dylan Byers wrote a thorough piece on the failures of text-based news sites in producing compelling live video, and the Lab's Joshua Benton looked at the consumers' side of the problem as well.
— Finally, a few thought-provoking pieces from the week: The Atlantic's Robinson Meyer and the Lab's Joshua Benton on the boxy style that's ubiquitous in newly redesigned news sites, Stack Exchange founder Jeff Atwood decrying the proliferation of dumb apps, and journalism professor Jeff Jarvis offering some prescriptions for the relationship between philanthropy and news. _Photos of a Netflix envelope by Scott Feldstein and of Piers Morgan in a Burger King ad by Cow PR used under a Creative Commons license._
The decision by the University of California Berkeley Graduate School of Journalism to cut loose Mission Local, one of the three local news sites it used to train journalists, is sad but unsurprising. The school faces plenty of financial pressure, and the costs of running the sites — funded initially by grants from the Ford Foundation — have long been an issue. In a memo to the journalism school community, Dean Edward Wasserman gave a number of reasons for the decision. He cited costs and the distance between the Mission District in San Francisco and the school's base on the Berkeley campus. Wasserman's third reason, however, was particularly disheartening.
Third, the natural evolution of the site itself is toward being an integrated media operation, and that requires sustained attention to marketing, audience-building, ad sales, miscellaneous revenue-generation, community outreach, special events, partnerships, and 1,001 other publishing activities that are essential to any site's commercial success. That's not really what we do. Those are specialized areas, and the J-School doesn't have the instructional capacity to teach them to a Berkeley standard of excellence. What's more, our students wouldn't have the curricular bandwidth to learn them—not unless we pared back other areas, and redefined our core mission as something other than journalism education.

I have a dog in this fight, as one of the founders of Berkeleyside, an independent local news site in Berkeley. Our ability to report the news remains at the core of what we do, but without awareness, adaptability, and a modicum of skill in those "specialized areas," we would have sunk without a trace very quickly. The same equation holds for anyone working at the many new ventures in journalism, even when they are vastly better capitalized than Berkeleyside, whether it's Walt Mossberg and Kara Swisher at Recode, Ezra Klein at Project X, or Sarah Lacy at Pando. What was so encouraging about Mission Local and its equivalents was that they gave j-school students a much better approximation of the real world of journalism today than internships in big newsrooms. It's a crazy scramble with scarce resources, juggling traditional reporting, video, and social media with hands-on engagement, building community and partnerships, figuring out the place of events, understanding the need for revenue, and so much more. I don't expect every j-school graduate to create their own news operation, but I hope some of them eventually do.
I'm certain, as well, that the majority of them will need to be familiar with those 1,001 other tasks if they are to thrive in the media world that is being formed today. The alternative, suggested by "That's not really what we do," is to rely on graduates schooled in business or technology to forge the new models that journalism needs. Let the Tim Armstrongs of the world figure it out. I'd rather see the journalists take the lead.
Lance Knobel is a founder of Berkeleyside, the independent local news site for Berkeley, CA, and curator of the Berkeleyside-run Uncharted: The Berkeley Festival of Ideas. _Photo of gathered reporters by Philippe Moreau Chevrolet used under a Creative Commons license._
Five years ago, in the worst days of the economic collapse, Len Downie and Michael Schudson wrote their benchmark report "The Reconstruction of American Journalism," attempting to chart a course forward for a news business in trouble. One of their major recommendations was that universities should become more engaged in producing reporting for their communities. If their teaching hospitals could both train future doctors and serve the public's health, why couldn't journalism schools fill some of the holes newspapers were leaving behind while training future reporters? One of the examples Downie and Schudson cited favorably was the Graduate School of Journalism at the University of California at Berkeley, where journalism students were "reporting in several San Francisco area communities for the school’s neighborhood news Web sites." While some j-schools have embraced the teaching hospital model, this week Berkeley announced that one of those neighborhood sites, Mission Local, would no longer be attached to the j-school. Instead, it'll be spun off as a private entity with a less-than-certain future, no longer getting student reporters as part of the school's course offerings. Dean Edward Wasserman said in a memo that the move was prompted by Mission Local's cost and because it distracted students from the core curriculum of the program. The site, which covers San Francisco’s Mission District, will relaunch as a for-profit. “It’s now time for Mission Local to take the next step and re-launch itself as an independent, stand-alone media operation,” Wasserman wrote. “That means ending its role in the J-School’s curriculum.
While [Berkeley professor Lydia] Chavez would have liked to see the school keep the site, she is ready to assume responsibility for the site, and we expect that it will continue under her ownership.” Chavez said the site will continue to experiment and try to find a sustainable model to support quality local journalism and provide young journalists learning opportunities. She said she's in the process of seeking investors; she declined to discuss her plans in depth, as they are still in the works. "It would’ve been wonderful to have this site, to have all of the sites, really continue to experiment and grow in the community that we’re in and to represent Berkeley, but you have to have someone who is really strongly behind them, and the new dean is not," Chavez told me. "He has other ideas that I’m sure will be exciting, so we’ll see what his ideas are." With funding from the Ford Foundation, Berkeley launched Mission Local in 2008 — along with a number of other sites covering other underserved neighborhoods in the Bay Area — to provide students with hands-on reporting experience in communities that are not typically covered by larger outlets. Whether the school will continue to support Oakland North and Richmond Confidential, its two other hyperlocal sites, is “up in the air at the moment” as the school reconsiders its curriculum, Wasserman wrote. In his memo, Wasserman, who was appointed dean in January 2013, gave three specific reasons for ending Berkeley’s involvement with Mission Local: * IT’S EXPENSIVE: Berkeley pays to operate the site all year, even though the graduate class affiliated with Mission Local is offered only in the fall semester. “The curricular value to our students is limited or even, at times, non-existent,” Wasserman wrote. The rest of the time, the site is run by a smaller corps of Berkeley students as well as a mix of students from other local universities, freelancers, community contributors, and interns.
“It was a fantastic educational tool, but it was not inexpensive,” Chavez said, adding that the program at Berkeley is quite small, with only 55 to 60 students per class. * IT KEEPS STUDENTS AWAY FROM CAMPUS: Berkeley is “bulking up and enriching” its program with new courses, additional speakers, better career services, and more, Wasserman wrote. So sending students off campus for class is not beneficial to their education, he argued. * IT DOESN’T FIT INTO BERKELEY’S CURRICULUM: Over the course of its existence, Mission Local has evolved into a full-fledged media organization requiring marketing, ad sales, and other business-side activities, which aren’t part of the school’s curriculum, Wasserman wrote. Mission Local has produced a print zine, created an app that gives tours of the Mission District, and even launched a Spanish-language version of the site. “Those are specialized areas, and the J-School doesn’t have the instructional capacity to teach them to a Berkeley standard of excellence,” Wasserman wrote. He added: "What’s more, our students wouldn’t have the curricular bandwidth to learn them—not unless we pared back other areas, and redefined our core mission as something other than journalism education."
Wasserman did not respond to requests for comment. _(UPDATE, FEB. 28: We did have a chance to talk with Wasserman after this story was published; see his comments here.)_ The teaching hospital model has gotten a lot of attention in recent years, in large part because of the work of the Knight Foundation’s Eric Newton and others in philanthropy who see local coverage as a useful extension of a j-school's educational mission. _(Disclosure: Knight is a funder of Nieman Lab.)_ Jan Schaffer, executive director of J-Lab at American University, said that Mission Local has been among the best of its kind.
Schaffer lauded Mission Local’s frequent updates as well as its attempts to experiment with different products in various mediums. “Nobody else that I know of does that,” she said. “Nobody else that I know of does that level of content.” She said that sites like Mission Local are about "learning it on the ground. Learning it every day. Learning how you distribute a hard copy newspaper, how many donations are coming, how many volunteers you need to make it work, how to write a grant proposal, how to sell an ad. Even if they’re not themselves doing that, just an awareness of that landscape is very valuable." Many, including a number of former Berkeley students, said they were concerned about how the school would replace Mission Local in the curriculum:
@MLNow Can't imagine J-School curriculum w/out ML. Am curious direction J-School is going that involves getting rid of amazing reporting lab -- Jamie Goldberg (@Jamiebgoldberg) February 26, 2014
My experience with @MLNow was one of the best things I got out of UC Berkeley's J-School program. -- marta f. (@marmotilla) February 26, 2014

Still, Wasserman emphasized in his memo that the school would continue to prioritize educating students on the business of journalism, as well as "improving on what we’ve done in the past, and making sure the future offers opportunities here at least as rewarding and memorable as theirs have been." _Photo of a Mission District mural by Gwendolen Tee used under a Creative Commons license._
Talk about spin. Two of America's once-iconic publishers are about to be spun. Spun off, that is, from parent companies that have fallen out of love with print and in love with moving pictures. The names of the Chicago Tribune and Time magazine may evoke the publishing golden age birthed by the Colonel McCormicks and Henry Luces, but these publishing divisions today are more than tarnished. They've become liabilities, weights on future enterprise, and anchors of low profitability as advertising revenues continue to be eaten away by the Googles and Facebooks of this era. Tribune Company and Time Warner will move on without their namesakes, a plaque or two in the lobby remaining behind to commemorate their illustrious histories. What we're seeing unfold this year is an orphaning of distressed publishing assets: setting them adrift in an inhospitable business climate, thinly clothed and with a heavy bag. Call it the publishing orphanage. It's a noteworthy moment in an almost decade-long reckoning with the long slide in both the newspaper and magazine industries. Both Time Warner and Tribune are working through all the financial and legal issues en route to hiving off their publishing assets from their core TV/movies/digital businesses. Both should have the process completed by the middle of the year, probably a little earlier. Both are following in the footsteps of other media splits, including 2013′s News Corp. and 2007′s Belo and Scripps. But both are planning on putting more of a burden on their publishing businesses than we've seen in previous splits, with Tribune's approach standing out as particularly Dickensian. In essence, it's a newer, harsher reality for legacy news operations forced to live on their own. In part, that's a reflection of the challenges that the non-print sides of Time Warner and Tribune face. Adjusted operating income was down in 2013 for Time Warner's HBO and Turner divisions and at Tribune's broadcast operations.
Yes, publishing may be distressed — but the TV/video path forward is chock-full of competitors too, taking money and customers away at every opportunity. Both Time Warner and Tribune are assigning significant debt to their split-off companies. But debt is only part of the story. The burdens being placed on standalone Time Inc. and Tribune Publishing are several-fold: * DIVIDEND. The new Tribune Publishing will have to pay the bigger Tribune Company a one-time dividend of about $325 million immediately after the split, as the Chicago Tribune's Robert Channick revealed last week. The money will be borrowed by the new company. * DEBT. Time Warner is sending off Time Inc. with about $1.3 billion of it. And we know that Tribune Publishing will have to borrow that $325 million to pay the dividend and take on so-far unspecified additional debt to finance its operations. * LEASE-BACK. Tribune has already separated out the real estate under and around its publishing operations from its eight newspapers, having figured out that much of the value of those publishing assets is in the dirt. Consequently, when the spinoff happens, the newspapers will have to pay an estimated $30 million in rental costs, through 2017, back to Tribune. Newspaper leases run five years; production facility leases run 10 years. Time Inc. is looking for new, cheaper downtown Manhattan office space, although it has paid significant lease costs as an occupant of the Time-Life Building. Neither of the new publishing companies will have solid real estate assets to bank on going forward. * STRIPPING OUT DIGITAL BUSINESSES. While Tribune's newspapers have struggled mightily, along with their peers, Tribune's CareerBuilder and Classified Ventures digital classified businesses have helped offset ad loss. Those businesses, and their significant cash flow, stay with Tribune. To be fair, Tribune has noted in its filings that parent "Tribune is also expected to retain most of its pension assets and liabilities." We'll have to wait and see what "most" means. So, to the question of the day: What difference will the split arrangements make for the journalism that the Los Angeles Times, Chicago Tribune, and Tribune's six other metro dailies produce? What are the chances that Time Inc.'s Time, Fortune, Sports Illustrated, and Entertainment Weekly, among others, will be able to find a high-quality print/digital way forward? Will the readers of those publications be affected by the new financial obligations of the publishers? The answer is painfully simple: Yes. By one measure, the new debt put on the publishing companies is reasonable. Time Warner's overall debt is $18.3 billion; it is assigning 7 percent of it to Time Inc.
Tribune's overall debt is $4.1 billion, most of which was incurred in its purchase of Local TV stations last fall; it is assigning 8 percent of it to Tribune Publishing. Both new companies, as others have argued, have sufficient cash flow to make the debt service. We can also add to the justification that, as divisions of larger media companies, both publishing divisions contributed to debt service all along. But that argument ignores the reality of 2014. Both publishers have only remained profitable through significant staff cutting. Given that revenues will continue to be down this year for both, somewhere in the mid-single-digit range, they'll have to continue cutting costs and staff to maintain profitability. The new debt service and lease obligations won't break their backs, but they'll be new weight added to backs already bent. That's in contrast to those other recent publishing splits, which were more friendly to the print half. The most recent and instructive parallel is last year's News Corp. split. Pushed by shareholders and Hackgate fallout to split his baby, Rupert Murdoch steered $2.6 billion in cash to the newspaper-heavy company and freed 21st Century Fox to head off into its future. The new News Corp wasn't assigned _any debt_, didn't have to pay a dividend, and kept all the real estate underneath its newspapers. Further, Murdoch threw Fox Sports Australia and digital real estate services into the new "newspaper" company, giving it a couple of growth drivers. Seven years ago, when Belo split off its newspapers as A.H. Belo, it assigned no debt to the newspaper spinoff. Also in 2007, Scripps separated out its newspaper and broadcast properties from its high-flying cable ones. In that case, the new cable business, Scripps Networks Interactive, got $325 million of the debt and E.W. Scripps, the new newspaper/broadcast entity, got $50 million of it. Neither Scripps nor Belo separated the real estate from the legacy operations. Why?
All three companies realized that the standalone newspaper entities needed every dollar possible to find a future. Arguably, the editorial operations of The Wall Street Journal, The Dallas Morning News, and the Naples Daily News are better off for it today. Will we be able to say the same when the Chicago Tribune, L.A. Times, Baltimore Sun, and Orlando Sentinel are set adrift? A few inquiring minds want to know. One of those is Henry Waxman, the Democratic ranking member on the House Energy and Commerce Committee. Waxman raised an alarm about Tribune's assignment of debt and dividend to its spinoff back in December. While newspaper business matters generally fall outside the purview of Congress and regulators, the committee does provide oversight of the Federal Communications Commission. Waxman's interest, though, is more local: The longtime L.A. congressman worries about the future of the local L.A. Times. So Waxman has met with Tribune CEO Peter Liguori and formally requested documents related to many of the burdens I listed above. There have likely been other staff interactions, and there may be more to come. Waxman is trying to bring political moral suasion to the Trib spinoff, asking what indeed will be the impact of the Tribune Company's stripping assets of every kind — terrestrial, digital, and financial — from the newspapers. But common sense here is paramount. Almost all legacy publishing companies are, to use the polite term, mature enterprises. More precisely, year after year, they take in less money than they did the year before and the year before that. There's no _extra_ cash lying around. The meager cash flows of these companies go to: * KEEPING THINGS OPERATING. With less money coming in, staff — the largest expense — has been steadily excised for five to six years. Operating expenses consume most of the revenue. * PROFIT. Almost all legacy publishing companies are profitable.
Other than some going into the red in the worst months of the Great Recession, they've kept themselves marginally profitable to satisfy shareholders. Many of those shareholders (buyers of cheap debt or shares, or converters of debt to equity out of bankruptcy) have put stringent workout plans into effect to make sure that profits are paid, even as workforces and product quality have declined. * INVESTMENT. Capital expenditures are low, and the buying of other companies to augment skills or technology is now unusual. Yet the best publishers have prudently invested in a product improvement here and a digital platform there in efforts to drive their companies into the digital age. It's the kind of investment that News Corp was able to make in buying Storyful in December for $25 million to aid its reporting, or the kind of buy parent Tribune completed in December when it added Gracenote to its portfolio for $170 million. Innovation requires money. * DEBT. Then there's debt service, with many companies still paying for ill-advised acquisitions made at peak market values. Think of these four as mouths to feed. Publishers must ration food among them, and there's not enough. So the short answer to the question: Yes, imposing new costs — debt service, dividend payments, or lease costs — on these spinoffs will make life harder. While life gets harder, more staff — including more journalists — get cut. The road ahead for Tribune Publishing and Time Inc. will be harder if the proposed debt and dividend plans proceed. The journalistic output is likely to suffer. Readers and communities will get less, and less experienced, reporting.
Let's do some math, first looking at the Tribune context and its numbers. First, there's Tribune's heavy cutting pre-split. In late 2013, the company announced a $100 million cost-cutting plan _in its publishing division_. That resulted in the elimination of 700 jobs across the eight newspapers, on top of 800 job cuts in 2012. The company made a point of saying that newsroom cuts were a small part of those layoffs, but we know there were at least dozens of them, all on top of cuts that have greatly reduced newsrooms from Hartford to Fort Lauderdale through the Sam Zell era ("The newsonomics of the Tribune's metro agony"). Just as one example, the Baltimore Sun has dropped to fewer than 140 journalists from a peak of more than 400. Tribune has some cash, but it's not expected to give any of it to the publishing entity. The most recent financials we have for Tribune show that the company has about $700 million in cash and cash equivalents. Now let's look at the percentage of profits that may need to go to servicing debt, and how much debt service could equal in terms of jobs. Overall, Tribune Publishing generated $150 million in operating profit for the first three quarters of the year, so we can extrapolate $200 million for the full year 2013. Of course, what generated those profits is cost-cutting. To get to that level of profit, it reduced expenses 13 percent — including 230 positions. The new Tribune Publishing can continue to cut — but a further 13 percent would simply continue the hollowing-out of the company and its newsrooms. Importantly, Tribune's newspapers aren't steady-state: Revenues continue to fall. Now, the new debt service. Let's say that Tribune Publishing will need to borrow $650 million overall, with half of that borrowing going immediately to Tribune Company as that "special dividend." What might it pay for that money? Lee Enterprises, considered a large, well-managed newspaper company, recently refinanced its own debt at 12 percent.
(That was actually _lowered_ from 15 percent.) Tribune may have access to cheaper money; let's say it wrangles a 10 percent rate for the new, market-challenged newspaper company. That's a payment of $65 million a year — or a full third of those 2013 profits. That's more weight on Tribune newspapers' backs. Now let's consider _individual_ backs and calculate that number in terms of jobs. At an average of, say, $75,000 a year, that's 866 jobs. That's out of a total of more than 8,000 full-time publishing division jobs. The arithmetic is fairly straightforward. The obligatory debt service could be paid for by having 900 or so fewer employees. Let's say it can limit its new initial debt to $500 million. That would still mean more than 650 jobs. In striving to make the point that it is trying to preserve journalist jobs, the company has pointed out that many of its cuts were in marketing and technology. That may be good for the journalists who would otherwise have been pink-slipped — but those are also two areas where news organizations of the future need _more_ smart investment, not less. In addition, journalist jobs continue to be cut back, even if they are a smaller proportion of the total cuts. Now let's look at Time Inc.'s numbers. Time Warner just announced its full-year earnings. To get an apples-to-apples comparison, we take out the new revenues driven by its fall acquisition of American Express Publishing. For the year, revenue was down about 5 percent, and 8 percent for Q4. Both ad and circulation revenues are tumbling. Time Inc., under new CEO Joe Ripp, has also been cutting in anticipation of going solo. Recently, layoffs of 500 employees, or 6 percent of Time Inc.'s staff, were announced. Ripp has also greatly reorganized the leadership team, putting some impressive new talent in place in key exec and product roles.
Just today, we see the poaching of Scott Havens, a highly regarded architect of The Atlantic's renaissance, who becomes senior vice president for digital. What effect might that $1.3 billion in debt have on the ability of that new team to transform Time Inc. into a growth company? Time Warner may well get a better interest rate for its orphan than Tribune, according to knowledgeable observers. Let's peg it at 7.5 percent. That would create an annual debt service of $97.5 million. That's the equivalent of 1,300 jobs — or another 12 percent or so of Time Inc.'s workforce — if and as revenues continue to decline. There are lots of moving numbers here, but the point is clear: As standalone companies with ever-falling revenue, each will have a more direct responsibility for paying off debt, and the likeliest way to pay for it may well be more aggressive staff cuts. Both Time Warner and the Tribune Company have legal obligations to maximize shareholder benefit; as recently as this week, a hedge fund began pushing Tribune CEO Peter Liguori to sell any Tribune asset he can. In these splits, though, we have current shareholders who will have their shares divided. Presumably, from a shareholder point of view, both TW and TRB would want to maximize the chances of _both_ new companies prospering and rewarding shareholders. Is the burden being placed on the spun newspaper assets prudent, given their marketplace and transformation challenges?
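The back-of-the-envelope math above can be sketched in a few lines of Python. Note that the principals, interest rates, and the $75,000 average salary are the column's hypotheticals, not reported figures:

```python
# Debt-service arithmetic using the column's assumed figures:
# annual interest = principal * rate; the "jobs equivalent" is that annual
# cost divided by an assumed $75,000 average salary, rounded down.

def annual_debt_service(principal: float, rate: float) -> float:
    """Annual interest payment on the assigned debt."""
    return principal * rate

def jobs_equivalent(annual_cost: float, avg_salary: float = 75_000) -> int:
    """Number of jobs the annual debt service could otherwise fund."""
    return int(annual_cost // avg_salary)

# Tribune Publishing: ~$650M borrowed at an assumed 10 percent rate
tribune = annual_debt_service(650_000_000, 0.10)
print(f"Tribune: ${tribune / 1e6:.1f}M/yr = {jobs_equivalent(tribune)} jobs")
# Tribune: $65.0M/yr = 866 jobs

# Time Inc.: $1.3B of assigned debt at an assumed 7.5 percent rate
time_inc = annual_debt_service(1_300_000_000, 0.075)
print(f"Time Inc.: ${time_inc / 1e6:.1f}M/yr = {jobs_equivalent(time_inc)} jobs")
# Time Inc.: $97.5M/yr = 1300 jobs
```

Plugging in the smaller $500 million Tribune principal at the same 10 percent rate yields 666 jobs, matching the "more than 650" figure above.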
The newsonomics of Tribune's metro agony
For Tribune, it's clear that it is going the spinoff route as a way to save on capital gains taxes when the newspapers are sold. (There are complicated tax issues involved in the spin, which you'll recall came after the Koch Brothers summer spectacular of 2013 — but observers point to a likely sale of the newspapers not long after the spinoff.) For the current Tribune Company and board, the newspapers appear to be a soon-to-be-dispatched afterthought — one they don't want to shine much of a light on. Just on Monday, at the J.P. Morgan Global High Yield and Leveraged Finance Conference in Miami, Tribune made its presentation. Given that the company is going mainly TV, most of the PowerPoint's 25 slides were devoted to broadcast, with real estate, digital properties, and syndication businesses highlighted as well. In the _three_ slides it devoted to publishing, it highlighted but three fairly broad numbers, with two speaking to a 20th-century audience — and none speaking of dollars and cents.

* 1 billion newspapers distributed annually
* 10 billion preprints distributed annually
* 70 million unique online visitors monthly

The new Tribune is ready to be done with the old Tribune, just as Time Warner can hardly wait to jettison Time Inc. — showing its Q4 and full-year financials both with and without the publishing assets. The divorces are about to be decreed, and the terms of disendearment betray a lost love.
The newsonomics of the Koch Brothers and the sales of U.S.' top metros
The Knight Foundation wants to delay the death of the Internet as we know it — at least for a little while longer. Today Knight is launching the latest installment of its Knight News Challenge, and this round will focus on a subject on many minds these days: how best to support a free and open Internet. Specifically, Knight is asking people how they would answer this question: "How can we strengthen the Internet for free expression and innovation?" Those who come up with a good answer — or at least an idea that can pass muster with Knight's experts and advisers — will get a share of $2.75 million. Knight has funded the contest for media innovation since 2007 and awarded more than $37 million over that span. This time around, Knight is partnering with the Ford Foundation and Mozilla to administer the News Challenge. The contest is open to anyone, with a simple application form on newschallenge.org and a deadline of March 18. Winners of the News Challenge will be announced at the annual MIT-Knight Civic Media Conference this June.
With the FCC attempting to rewrite its open Internet rules after having them struck down by a federal appeals court, and the pending merger of cable companies Comcast and Time Warner (not to mention Netflix brokering a deal for better service on Comcast's broadband network), there has been growing concern about the future of the Internet from consumer advocates and other technology watchers. (Not to mention those three letters N, S, and A and attendant concerns about surveillance and privacy.) "We see the Internet as a really important resource for expression, for learning, for journalism, for connecting to one another as neighbors in the community — we want to make that stronger," said John Bracken, Knight's director of journalism and media innovation. With the events of the past few weeks, the News Challenge might seem particularly timely, but Bracken said protecting the free flow of information has been among Knight's main concerns for years. He pointed to the Knight Commission on the Information Needs of Communities in a Democracy, a collaboration with the Aspen Institute that aimed to "maximize the availability and flow of credible local information" and "enhance access and capacity to use the new tools of knowledge and exchange." Bracken said recent events will only add more urgency to the News Challenge. "Clearly it's a topic on a lot of people's minds," he said. "It'll be exciting to see what it yields in terms of ideas and broadening our idea of the topic." In the most recent round of the News Challenge, which focused on health, applicants were asked to answer the question "How can we harness data and information for the health of communities?" The latest round offers a similarly open-ended question on the subject of the Internet. Bracken said that was done by design to try to spur as many new ideas as possible. (Knight is again using IDEO's OI Engine to channel ideas through the contest.)
Making the ask appealing is one part of the equation; another is taking an active approach to finding people to apply. One of the reasons Knight partnered with Mozilla and Ford is to tie into their networks. In the Venn diagram the three organizations share, Internet openness and democratic access to information slots nicely into the middle. Knight and Mozilla already collaborate on the Knight-Mozilla OpenNews. "We want to expand the network of people we're reaching. You look at the Internet and open web and building useful tools, and Mozilla and their community come to mind," Bracken said.
The Comcast/Netflix deal: Twin looks in the rearview and at the road ahead
Bringing partners into the News Challenge is only the latest tweak Knight has made to the competition in the last several years as it re-evaluates the way it funds innovation in journalism. Since 2012, the News Challenge has been broken up from one annual call into smaller, shorter, themed contests. But the long-term future of the News Challenge remains under examination. (As Knight president and CEO Alberto Ibargüen said at the MIT-Knight Civic Media conference last year: "It may be finished. It may be that, as a device for doing something, it may be that we’ve gone as far as we can take it.") Bracken said they're still re-tooling the competition, as well as expanding the funding opportunities for new projects through other programs like the Knight Prototype Fund. "We want to constantly extend the network of people we work with, and one way to do that is collaborating on a new News Challenge," Bracken said. _Full disclosure: The Knight Foundation is a funder of Nieman Lab, though not through the News Challenge._ _Image by Roo Reynolds used under a Creative Commons license._
Opening up government: A new round of Knight News Challenge winners aims to get citizens and governments better information
One of the benefits of your newspaper's publisher also being the owner of a baseball team? He sees spring training as a circulation opportunity. The Boston Globe announced today it will begin delivering the newspaper around Ft. Myers, Florida. That means if you live in Lee or Collier counties, you can get the Globe delivered to you seven days a week or pick it up down the street at the 7-Eleven. Having satellite markets for newspaper circulation — particularly in snowbird paradise — is not entirely unusual; you can, for example, get The New York Post delivered to you in Florida as well. With Nana and Pop-Pop among the ranks of retirees in the Sunshine State, Florida makes sense as a market for out-of-town news. But in this case there's also a bit of John Henry magic at play. The new owner of the Globe is preparing new products for the paper, online and in print. Expanding circulation to Florida could be a boost to both of his franchises, as his newspaper and his ball club (and his newspaper's coverage of his ball club), are coming together for Grapefruit League play. From the Globe's news release:
For the first time, The Boston Globe is available on newsstands in and around Ft. Myers, Fla. - throughout Lee and Collier counties. Beginning February 24, the newspaper will be sold at Walgreens, 7-Eleven, Circle K, CVS, Sweetbay, RaceTrac and Hess stores, as well as through home delivery. Both daily and seven-day delivery options are available and readers can subscribe at bostonglobe.com/florida, and deliveries will begin Monday, March 3.
When Slate decided to get into the daily drive-time podcast business, it took a cue from a proven winner: sports radio. Or at least the public radio version of sports radio: It hired NPR's Mike Pesca for the launch of a new daily podcast — a departure from Slate's other podcasts, which are delivered mostly in weekly installments.
Podcasting has been a growth area for Slate, and the site's collection of shows now grabs around 2 million downloads a month, according to Andy Bowers, Slate's executive producer of podcasts. By going daily, the site wants to grow that audience further and continue to capitalize on a growing source of advertising for Slate. "Slate loves podcasts," Bowers said. "They do really well for us, and everyone wants to be involved. We want to figure out how to smartly grow without overextending ourselves, but we're going to keep growing." Pesca's new podcast is one step toward that. Everything from the host to the format to when the show will be released is designed to try to capture the best possible audience, Bowers told me. The currently unnamed show will run 20 minutes (as opposed to the up-to-an-hour length of many Slate podcast episodes) and be delivered to listeners in the afternoon, just in time for the evening commute home, Bowers said. Expect it to launch in April. This means building a new habit in Slate's audience, moving listeners used to a weekly fix to a daily routine. While Slate releases podcasts nearly every day, no single show is produced on a day-to-day basis. Most of Slate's podcasts are released overnight, and listeners typically download them in the morning, Bowers told me. By releasing Pesca's show earlier in the day, they hope to push people into developing a regular habit. Bowers said the show will be topical, with conversations and interviews on the day's news, recognizing that people already get their breaking news in other ways. They want the show to fill in the outlines of the top stories a little more. "We thought it would be fun to take the most interesting things about the flood of information coming across the screen and drill down more," Bowers said.
Though Pesca was already a host for Slate's Hang Up and Listen, it's his previous life at NPR that will be helpful not only in making the show lively and accessible, but also in delivering something timely, Bowers said. Slate also plans to bring on a dedicated producer specifically to work on Pesca's show.
Slate doubles down on podcasts, courting niche audiences and happy advertisers
The genesis of Pesca's new show came from an experiment with the popular Political Gabfest podcast in the fall of 2013. During the government shutdown, Slate decided to take the show nightly, with a recap of the day's developments on the budget talks between President Obama and Congress. Bowers said the Gabfest Extra shows, which lasted the duration of the shutdown, proved successful, drawing in what he said were "big numbers, much bigger than expected." (Bowers declined to offer more concrete figures.) Gabfest Extra proved Slate's listeners had an appetite for a higher dose of shows. They've since produced "Extra" installments of other podcasts tied to timely events or news. "It made all of us realize there could be a market for a kind of evening, drive-time companion to the day's news," Bowers said. An audience pulling down 2 million downloads a month (granted, downloads is a somewhat murky metric for gauging listeners) may not seem like a lot in comparison to a radio audience. (NPR's All Things Considered — probably the most popular evening drive-time option among much of Slate's audience — gets around 11 million listeners a week.) But at the scale of what Slate is doing, the number looks good. More importantly, advertisers are paying to get on Slate's shows. Slate publisher Matt Turck told Business Insider that podcasts now make up between 5 and 10 percent of Slate's advertising revenue. "We're making money, there's great advertiser interest, and we have a lot of ambition for other things we can do," Bowers said.
Pesca's show is likely a sign of things to come for Slate, with another new show planned to debut later this year. Slate's investment in new shows follows bigger trends in podcasting, as hosts and producers are building networks to help grow audience as well as the business end of podcasts. Pesca said people who work in audio are trying to find ways to capitalize on the growing market opportunity that's come with the rise of smartphones and other devices. "I think in years to come, we will see podcasting audiences, or on-demand radio audiences, that are bigger than radio audiences," Bowers said. _Photo by mbschn used under a Creative Commons license._
Welcome to Radiotopia, a podcast network with the aesthetics of story-driven public radio
We have an opening for a staff writer here at Nieman Lab. If you’re interested, apply over here. Whoever fills this job will join our bustling little Harvard newsroom (currently made up of three reporters and me) to report on journalism innovation — innovation in how news gets reported, produced, distributed, discovered, consumed, and paid for. If you enjoy the sort of stories you read here and would like a chance to report, write, and edit them full time, you might be a good candidate. For more details, see my writeup from a previous time we had an opening and, of course, the job listing linked above. A few notes about the position, which is awesome:

— While we’ll look at candidates with different levels of experience, I’m particularly looking for someone who would be able to split time between writing and editing. Someone who already has some experience and skill in that part of the job — assigning stories, working with freelancers, editing copy, writing headlines, making art decisions, and so on — would be especially welcome to apply.

— This person's work would have a special emphasis on mobile: how smartphones (and to a lesser degree tablets) are changing the landscape of news.

— Because of the way Harvard hiring works, the job posting lists this position as "a term appointment ending June 30, 2014." Many (nearly all, I believe) Harvard jobs of this type are officially run as a series of one-year term appointments that end at the end of Harvard's fiscal year. In the five years since the Lab launched, every full-time position we’ve had has been posted under these terms, and every one of them has been renewed every year. Changes in funding could alter that in the future, of course, but if we're happy with the work being done, our expectation is that this person would stay well beyond that date. Don't let that be a hindrance.
— To be considered for the position, you must apply at the Harvard HR site linked above, where you should include a cover letter telling me why you think you’d be right for the job. (Don’t email me a resume directly; I’m not allowed to consider anyone who doesn’t go through the official HR process.)
Our friend Jeff Hermes over at Harvard's Digital Media Law Project has a post noting with concern a just-released set of U.S. Department of Justice guidelines around the propriety of investigating journalists:
Although the Guidelines extend certain protections to "members of the news media," they (like the prior version) still contain no affirmative definition of that term. Instead, the only way in which "members of the news media" are defined is through exclusions. A number of these exclusions (predictably) relate to persons acting as agents of a foreign power, plotting terrorist activity, et cetera. [...] This should give independent journalists significant pause. When government agencies attempt to define a journalist, they tend to adopt either an employment-based approach or a functional approach; the DOJ now seems to be eschewing a functional definition (or at least one as broad as in the Privacy Protection Act). [...] This leaves journalists unaffiliated with a news organization on potentially unstable ground with respect to the security of their communications against secret government inquiries.
I appreciate Rob Meyer writing this post so I didn't have to: It seems as though every new news site redesign has a common thread: stories presented as uniform rectangles with a text overlay. (Rob's piece is really about the boxiness, but the text overlay trend is also approaching Defcon 3.) Bloomberg View's redesign today prompted it, but there are plenty of others: NBC News, MSNBC, Gothamist, the top of The Verge, the "premium" version of The Dallas Morning News, Vocativ, Digg, Digiday, the less-than-loved new Slate, and more.
Today In The Continuing Conquest of The News Web By Uniform Rectangular Images With Text Overlays: http://t.co/b8X5odH4rq -- Joshua Benton (@jbenton) February 25, 2014
Someday soon, we’ll all be turned into rectangled photos with white text and a black semi-translucent box over it http://t.co/uaabauONkY -- Joshua Benton (@jbenton) February 24, 2014

It's not that boxes are new or anything — the entire web is literally built on them. But it's a clear trend and, I think at least, a slightly dispiriting one. At the same time we're seeing increased creativity in article page design, we're seeing a sort of clotted sameness descend upon front pages and section fronts. It encourages the reader to see everything as just another identical widget of content, I fear. And it, in some cases, limits how much text or other cues one can add to tease a story to the reader. Personally, I find them too scannable and too easily ignorable. Rob suggests responsive design may be a cause; Ethan Marcotte, the guy who, ya know, invented responsive design, says no. Far be it from me to dispute Ethan, but I think responsive _has_ generally led to a regularization of front page chunks in order for them to reflow well on phones. And it fits with the cards metaphor we're seeing everywhere. (Thanks, Pinterest.) But I think the bigger culprit here is the rise of tablets — both because web designers are taking cues from tablet apps and because tablet traffic is growing as a share of total traffic. What does tablet design prize? Big, tappable areas. It's a kind of buttonization.
L.C. Angell says he builds websites with himself as the target reader. So when Angell, the creator of the highly curated men's shopping site Uncrate and the viral video site Devour, decided to launch a site covering sports, he had his own interests in mind. Angell wanted a site that could provide a sports news fix about the leagues he cared about, but which was both more digestible than a traditional 700-word article and which could cut through the noise of Twitter. Enter Rookie. First, before you do a spit take: Tavi Gevinson's much-lauded Rookie Magazine hasn't suddenly shifted its target audience from teen girls to NFL fans. This is rookie.com, not rookiemag.com. Here's what a Rookie story looks like: On the left, a big photo and a concise paragraph about its topic, like whether the Detroit Lions' Matthew Stafford is an elite quarterback. On the right, a half dozen or more quotes — from players, coaches, executives, pundits, or reporters — discussing the topic at hand. In the case of Stafford, Rookie pulled quotes from ex-players like Joe Namath and Troy Aikman, ex-coaches like Jim Schwartz and Mike Ditka, and others. That's pretty much it: a summary paragraph and a set of quotes. The story as a whole is shareable, but so are each of the individual quotes. "To me, I just don't even care what my buddies are saying about the game. It's a waste of time," Angell said, explaining one way Rookie differs from Twitter. "I want to know what LeBron James thinks about Russell Wilson winning the Super Bowl." Rookie likes to choose stories that will have a long shelf life, and it keeps updating them with new developments or quotes. A couple weeks ago, it posted a storyline about former Texas A&M quarterback Johnny Manziel and his prospects in the NFL draft. New quotes continue to be added to the post (Jerry Jones thinks he'll be a star), and it'll continue to be updated through Draft Day in May.
And Angell envisions stories that span years, such as the NFL's concussion crisis, that could have hundreds of quotes on one page. "That would be an amazing resource to have one URL to that," he said. (Think of it as a low-friction analog to the recent interest in explainers that unite disparate content into a single, digestible web page.) A team of four runs the site, and they aggregate the quotes from other news organizations as well as Twitter and Facebook, linking back to the original source. Since the rise of the web, there's been a low-voltage, long-lasting discussion around what is or should be the "atomic unit of news." In a print world, that was an easy discussion: the article. But in the disaggregated online context, is it the fact? The tweet? The listicle item? The Wikipedia-style summary, unbound by news hook constraints? Could quotes be next? They've played a key role in journalism forever, of course; if aggregators assemble bits and pieces of other people's stories, those stories assemble bits and pieces of other people's comments. Increasingly, they come direct from the source to the public without a middleman: Think of how many athletes use their Twitter accounts for the sort of comments that previously would have gone through a local beat reporter. Their brevity would seem to line up well with social media. (And, from a business point of view, quotes from public figures don't come with a price tag.) Angell is betting the atomic quote has a future, and even wants it to eventually become a platform for athletes to share their thoughts on the sports news of the day without worrying about trolls on Twitter. He said he is not quite yet sure how it would work, but Angell said he has spoken with athletes who have expressed interest in being able to post their thoughts directly to Rookie's ongoing threads about various topics. "Twitter is kind of horrible for professional athletes," Angell said. 
"In terms of whether they do something good or something bad, these guys can't even look at their '@' messages, basically is what these guys told me. It makes them mad. They want to respond, and then they look horrible for responding to these things." (Of course, some use it for inspiration.) Rookie's Facebook and Twitter followings are still small, and a search for "@Rookie" on Twitter still brings up more results for Gevinson's Rookie than for the sports site. (For what it's worth, Angell says he isn't concerned with his site getting confused with Gevinson's: "Our target audience is different enough that there shouldn't be any confusion." Rookie editors have also taken to Twitter to defend the name.) Angell is convinced that Rookie's choice, timely quotes on the sports news of the day will attract a following. Rookie already covers the NFL, NBA, MLB, and NASCAR, with plans to expand into other pro sports, but not college sports. He says the site's simple design, which is responsive to mobile devices and tablets (an app is in the works), and its striking photography will help set it apart from other websites in the crowded world of sports media. The site has launched without ads, but Angell said there will eventually be advertising on Rookie. Still, he said readers are so used to overwhelming sports sites like ESPN.com or Bleacher Report that they are not accustomed to Rookie's stripped-down design. "A lot of people, honestly, don't get it. I had to put a little how-to on the story pages. It's crazy, but this website is so simple they weren't used to a simple website."
An interesting move from last week: National Journal is getting into the database business. The company's launching a new Document Library, a service that will feature research, white papers, testimony, press releases, and other information that might be useful to people who do business in Washington. The service will be free to subscribers (with non-subscribers getting a limited version) and will be sourced from government agencies, think tanks, trade groups, and universities. It's a smart move, similar to other media companies that have tried to leverage data or primary documents as an advantage and possible revenue source. Here's National Journal president Bruce Gottlieb explaining the library to Folio:
"A big part of their [members and subscribers] job is staying on top of information," he says. "In many cases the source material is just as useful as a write up. What this allows us to do is give people one place to access a direct source in order to stay on top of fast moving, complicated information."
In romance, as in money, it's all about timing. The Comcast/Netflix damn-the-buffering, full-speed-ahead agreement announced this weekend is all about timing. This is a time of major momentum for each company. The last thing either needs is a speed bump. Netflix, with 44 million subscribers and oozing growth worldwide, has distanced itself from its rivals, both with those sheer numbers and the brand buzz of _House of Cards_ and other value-enhancing original programming. In its rearview mirror, it can see Apple, Amazon, Hulu, and HBO, among many others, trying to figure out how to play catchup.
With Internet buzz of slower streaming performance beginning to eat away at the seamlessness of its offering, Netflix knew it had to act sooner rather than later. Remember Qwikster, the company's aborted initiative to split its DVD-by-mail and streaming businesses? Many a business case history has been written on it, but one clear lesson applies here: _Deal with issues as quickly as possible._ Netflix and Comcast had been negotiating on performance issues for as long as a year, but you could hear the crescendo of easy-streaming-is-over conjecture quickly building over the last two months. In addition, Netflix needs to clear its decks for the introduction of higher, and tiered, pricing: Who wants to pay more if something's not just working right — and may get worse? Comcast needs to pick off as many impediments to regulatory approval of its acquisition of Time Warner Cable as it can, as hearings are scheduled and forces like Craig Aaron's Free Press are making their case against it. (Excellent Tom Ashbrook On Point on the nuances of consumer impact of the acquisition here.) Netflix could have been a major impediment. It could have been a poster child of consumer concern that Comcast's soon-to-be-even-more-outsized power was bad for the viewing public. Is Reed Hastings going to make a fuss now? (The agreement to not do so is in invisible contract ink.) Going into the year or so of regulatory hearings, Comcast has to put on its best face of problem-solving, partnering friendliness. There's also Comcast's rearview mirror. Yes, it would become a broadband heavyweight — with something like 40 percent of the broadband market in the U.S. — but that's today, with old-fashioned, slow-as-Slovakia (we're No. 32, they're No. 33) broadband. Take just two announcements over the past few days: On Thursday, Google announced it would be interviewing the city fathers and mothers in 34 new U.S. cities to determine where to install Google Fiber next.
That's the promised "100× faster than Comcast" service. Today, WhatsApp said it would move into free global phone calling. Neither foray makes a big difference to Comcast in 2015, assuming the (likely) approval of the TWC deal. By 2018, though, both could be taking significant chunks out of the Comcast bundle.
Fast data. Free Internet voice. Those two cut right into two of the three bases Comcast is covering with its Triple Play strategy. At first base, all-you-can-eat cable is in slow but steady decline. On second, Internet voice, a great throw-in replacement for landlines and — what was that thing we used to pay for? — "long-distance." On third, broadband. That, of course, is the growth plan, and, as I and others have written, the reason behind the Time Warner Cable deal. ("The newsonomics of Comcast and our digital wallets") Yes, big is good and carries consumer concern. Yet the forces of digital disruption never sleep. So chalk up the Comcast-Netflix deal as an attempt at road repair, as both companies try to scan the environment that's ahead of and behind them. _Photo of alternate Netflix delivery protocol by Phil King used under a Creative Commons license._
The newsonomics of Comcast's deal and our digital wallets