June 2010 Archives

Get real paid

26 June 2010

Tom Whitwell, assistant editor at The Times and responsible for developing the newspaper’s paywalled online site, did not hide his irritation at some of the helpful advice dished up by internet observers since the decision to ask people for money to read the news.

“How it’s been reported: it’s like we haven’t noticed [the problems],” said Whitwell in a session on paid-for sites at the News Rewired conference yesterday. “We have been watching Twitter and the blogs saying this is going to be difficult: ‘Don’t they realise that their audience is going to drop?’

“My favourite was one that said if we believed all our free customers would convert to paid, we would make £2bn.” Whitwell let us into a secret: “We are not expecting to make £2bn.”

Whitwell stressed: “We are not underestimating the scale of this challenge. Eighteen months ago we felt we were at a fork in the road. We could carry on as we were or we could look at something different. The free option looks a lot less appealing than we had thought at first. We were making money but not an enormous amount of money [from having a free site].

“We looked at how we could expand the site. We looked at how much money that expansion would bring in and it would not be a lot more. And a lot would be drive-by, often overseas traffic.”

The push to serve a large, drive-by audience was forcing The Times and other newspapers into a race to the bottom. “We looked at how people were doing pile-high news. They were writing 20 different versions of a story during the day just to stay at the top of Google News. And we saw how far they were getting away from their brand.

“We wanted to do what The Times stood for,” he said, rather than watching starlets getting out of cars to see if they had any underwear on. In the drive-by news world, he said, stories were about celebrity and gruesome accidents.

The need to get all the money from advertising had other problems: “The barrier between commercial and journalism was getting very thin.”

So there were clear reasons for moving towards the paywall. And one reason for not going there: “The other path was terrifying. It’s a real leap into the dark. We understood immediately that this would change our relationship with the audience. So we looked at what we can do on a site where we have a real relationship with the audience.

“We could have a site with the reader at its heart. With a free site, it’s all about trying to push up the pageviews to push lots and lots of ads. At Times Online we were getting five thousand comments a day. But we didn’t feel it was a real community.”

Whitwell was not going to discuss numbers at the session although he claimed that the “figures we are seeing are very encouraging”.

Alastair Bruce from Microsoft said paywalls could work but agreed with Barry Diller’s pronouncements that “it will take some time. There are enough large corporations going after them that they will work at some time”.

Bruce showed a table of the large media organisations that either have some sort of paywall or are about to launch them, which showed that there is a large range of charging options being tried while the market settles down.

If you factor in what is happening in the trade sector, the range of options multiplies again. Karl Schneider, editorial development director at RBI, talked about the four paid-for operations that the large, Sutton-based publisher runs, not including the paywall that New Scientist operates.

News is a part of a number of these but it’s often data-driven news that is provided along with the raw data itself. “It’s hard to come up with a powerful offering that is only about news,” said Schneider.

A prime example is ICIS, which is a site that provides price information for industrial chemicals. The news is often aligned with that pricing information. Schneider explained that it’s not enough to simply write a story about a plant catching fire in Osaka. You have to describe its capacity versus world supply and what that will do to prices, so that companies can use it in pricing negotiations.

“It’s news that you can use. News that you can act on. You have to write it in such a way that the user can pick it up and use it,” said Schneider.

The XpertHR site for human resources people has less data but more about employment legislation. So it provides content such as model relocation policies, which these people can download and use as templates for their own policies, and ‘living features’: online equivalents of the old loose-leaf publishing business that cover updates in legislation. This avoids the traditional problem in trade publishing where you write a feature that covers the changes but force people to go and dig around for the details. “You can find for free the information presented on XpertHR but it’s not packaged like this,” said Schneider.

The introduction of a paywall changes the relationship with advertisers as well, as newspapers like The Times found 40 years ago when they changed their approach to cover pricing. Bruce remarked on the irony that “as soon as you become a subscriber, you suddenly become more valuable to advertisers”.

Schneider said, because advertisers can treat a small but highly relevant audience as being more valuable, the ad rates per reader can vary wildly. “At one extreme you can easily charge less than £1 per thousand on a CPM model. But you can get £50 per person in some areas. The range of prices is absolutely huge.”
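The gulf between those two pricing models is easier to see as arithmetic. A quick sketch using the figures Schneider quoted; the assumption of one page impression per reader in the CPM case is mine, not his:

```javascript
// The two pricing models quoted above, reduced to revenue per reader.
// CPM case assumes one page impression per reader (my simplification).
const cpmRate = 1;         // £1 per thousand impressions
const perPersonRate = 50;  // £50 per reader in some niche areas

const cpmPerReader = cpmRate / 1000;            // roughly £0.001 per reader
const ratio = (perPersonRate * 1000) / cpmRate; // difference between the extremes

console.log(ratio); // 50000
```

Even allowing a generous number of pageviews per reader, the niche-audience model is orders of magnitude ahead per head, which is Schneider’s point.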

The trouble is that publishers have only just started to realise how different advertising models can bring much better returns. “Content is where we have done very well. But we have been a miserable failure in ads. Look at what we have been selling to the advertisers: stuff we sold in the magazines just stuffed on the web. That is where I am optimistic about the future, because of the range of opportunities that we have,” said Schneider.

At the News Rewired conference this morning, Twitter got a lot of attention. Journalism.co.uk, who organised the event, were keen to push the #newsrw hashtag. And, naturally, during the Building Online Buzz session, Twitter emerged as one of the better mechanisms for driving traffic to a website. But another, supposedly dying technology turned out to be as important, if not more so: email.

No-one sings the praises of email much. It’s full of spam and over-CCed messages to let you know that a blue Ford Focus is clogging up the CEO’s parking space or that Kate in marketing has a pile of buns to celebrate her birthday. Yet, even for sites and campaigns that you’d expect to be flash-mobbed by a tweet from the likes of Stephen Fry, email is still one of the big sources of traffic.

For all the work that people put into search-engine optimisation, the search engines turn out to be pretty poor drivers of web traffic compared with the other means. It’s folks recommending stuff to other folks that drives the traffic.

“Not many people search for our website. It’s mainly people who were directed to it,” said Mike Harris, public affairs manager of the Libel Reform Campaign.

Vikki Chowney, editor of Reputation Online, said: “Thirty to thirty-five per cent of our traffic comes from Twitter.”

But another third comes from email - mainly from the newsletters that the site sends out. “Our email newsletters are phenomenally popular and we still get a tremendous amount of traffic from them,” she said.

“The king will continue to be email,” said Tony Curzon-Price, editor-in-chief of Open Democracy. Although the traffic drivers are still changing - Curzon-Price went through a potted history of web traffic generators from listservs through to Facebook - he pointed out that some communities are still very focused on supposedly old-hat things such as web-based bulletin boards and forums.

In his afternoon keynote, Marc Reeves, editor of the Business Desk West Midlands, said: “Eighty per cent of our web traffic is driven in the hour-and-a-half after the email is sent, so we know it works.”

“Because we are not playing the SEO game, the headlines are really important. It’s the headline that drives the traffic through. We are not bothered with SEO juicing because the minority of our readers come from the search engines,” said Reeves.

If the Web 2.0 stuff worked for the site, Reeves would use it more, although the site does have a Twitter feed and the other things you would expect. But, he emphasised: “Our readers are not Web 2.0-enabled so why force them down that path?”

I've started reading You Are Not A Gadget by Jaron Lanier. The book has a good premise, not being the techno-utopian screed I'd feared. Anything that takes a pop at the future of the hive mind overlords gets instant points from me. But it doesn't get off to an auspicious start when Lanier takes on the failings of MIDI. It has plenty of failings but Lanier omits two important details about the standard for transmitting data to control electronic musical instruments.

The most glaring error - although it's probably more for the purposes of hyperbole than through ignorance - is the claim that MIDI only transmits note-on and note-off messages. Even early MIDI synths respond to more than that. Yamaha was very keen to make the DX series synths respond to a breath controller because of concerns over expressiveness. The level of expressiveness was limited compared with an analogue synth for a long time but the basics were there. And hardly anybody ever bothered to use the breath controller input except for keen experimentalists such as Michael Brecker.

In an interview in the 1990s, Brian Eno asked for an instrument like the DX7, which he used heavily, that would have a lot more means of articulation - all of which was possible within MIDI. No-one stepped up to make Eno's dream synth. The charts are, as Lanier complains, full of mechanistic music but this is due to artist and consumer choice - as evidenced by the seemingly endless litany of Ministry of Sound compilations. But in the margins, people have dealt with the limitations of MIDI and are beginning to transcend them.

A legitimate complaint against MIDI is its atrocious data resolution. Working in the 1980s, the designers had the limitation of a slow serial communications link to deal with. So all the controllers, other than pitch-bend, were confined to a resolution of 7 bits - just 128 discrete steps. That's pretty granular although the net effect is not as bad as it seems.
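Those 128 steps fall straight out of the wire format: every MIDI data byte keeps its top bit clear to distinguish it from a status byte. A sketch of the byte layout as I understand the MIDI 1.0 spec - the function names are mine:

```javascript
// A control-change message is three bytes: status, controller, value.
// Data bytes keep the top bit clear, so values get only 7 usable bits -
// the 128 discrete steps mentioned above.
function controlChange(channel, controller, value) {
  const clamped = Math.max(0, Math.min(127, value)); // 7-bit ceiling
  return [0xB0 | (channel & 0x0F), controller & 0x7F, clamped];
}

// Pitch-bend is the exception: two 7-bit data bytes (LSB then MSB)
// combine to give 14 bits, centred on 8192.
function pitchBend(channel, value /* 0..16383 */) {
  return [0xE0 | (channel & 0x0F), value & 0x7F, (value >> 7) & 0x7F];
}
```

So a breath controller or mod wheel sweep moves in steps of 1/128th of its range, which is audible on a slow swell but tolerable on most material.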

Film composers are able to produce convincing soundtracks - augmented by live orchestras only in more lavish productions - using banks of MIDI-controlled samplers. Most of these are now realised in software so they work around the poor speed of hardware MIDI but the core protocol is the same. People who want to avoid some of the workarounds needed for MIDI are now using protocols such as Open Sound Control (OSC), which is way more flexible than MIDI ever was. It makes possible new instrument controllers such as the Eigenharp. It looks like Darth Vader's bassoon but it's one of a new generation of electronic musical instruments that don't seem at all affected by Lanier's MIDI lock-in, other than a lack of interest from commercial music producers.
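To see why OSC is more flexible, it helps to look at the wire format: addresses are free-form strings and arguments can be full 32-bit floats rather than 7-bit bytes. A minimal sketch using Node's Buffer API; the '/breath' address is my invented example, not anything from the OSC spec:

```javascript
// OSC strings are null-terminated and padded to 4-byte boundaries;
// the type-tag string (",f" = one float32 argument) follows the
// address, then the big-endian argument data.
function oscPad(str) {
  const bytes = Buffer.from(str + '\0', 'ascii');
  const padded = Buffer.alloc(Math.ceil(bytes.length / 4) * 4);
  bytes.copy(padded);
  return padded;
}

function oscFloatMessage(address, value) {
  const arg = Buffer.alloc(4);
  arg.writeFloatBE(value, 0); // OSC is big-endian throughout
  return Buffer.concat([oscPad(address), oscPad(',f'), arg]);
}

// e.g. oscFloatMessage('/breath', 0.7312) - a continuous controller
// with float precision instead of 128 steps.
```

The named address space also means an instrument like the Eigenharp can invent controllers freely instead of squeezing them into MIDI's fixed controller numbers.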

The Eigenharp has its own problems. It provides an impressive array of sensitive controllers but needs some work in the usability department as it makes Boehm fingering on a woodwind instrument seem like a triumph of ergonomics. But it's hardly constrained by the tyranny of a 1980s hardware protocol.

Pointing to mechanical music, with MIDI at its core, is an emotive argument. But that's all it is once you dig into the detail. That doesn't really bode well for Lanier's book even when I'm sympathetic to his core premise.

Dave Winer wants to present text that suits skim-readers by folding away extraneous detail. If you look at the example that Winer presents, it's hard to get away from the thought that, although it looks workable at first glance, he has basically reinvented the footnote. And in such a way that the writing becomes unnecessarily clogged up with footnotes.

Direct action

6 June 2010

The often less-than-happy link between journalism and PR is breaking apart as PRs look to do more that is aimed directly at consumers, as reported in the Independent. To be honest, I thought the trade/B2B sector would see this first, having pointed to the possibility back in 2007. I forgot to factor in the larger amount of money that goes into consumer PR, which could translate into a greater willingness to take chances.

Despite an apparent trend driven by appointments of journalists by PRs that are not account-director roles, it's worth having a closer look at the examples of direct PR that Edelman cites. They are not dramatically different from the work already done by agencies where they expected just press coverage in the past or were creating material for user or sales meetings.

Although PRs might have grasped the idea that direct communication with consumers is worthwhile, there is a big question mark over whether their clients will like what they plan. The advantage for the PR of having the press in the way is, to an extent, deniability. They might be aware of the consequences of using a particular approach but if it goes pear-shaped, it's still possible to blame the journalist for "misunderstanding the message". If you take away that layer, the people in charge of promotion or engagement or whatever you want to call it are far more exposed.

So, in the short term, I'd expect the opposite of what should happen to take place. The trend in recent years, despite all the talk about engagement and two-way communication, has been to sell, sell, sell. Don't go off-message, no matter how dull that message might be. Because no-one is going to get fired for sticking to the pre-approved script. At least not until companies start to see their profiles become less and less prominent. Then they might have a go at proper communication.

This will have an effect on the way the media operates but learning to work around the relentless stream of dull, self-serving messages has been part of the game for a while.

Link culture

1 June 2010

You’d think after 20 years, people would have worked out how to compose links on the World Wide Web. But Nick Carr, who is very exercised about distraction in modern society…

Oh look, kittens.

…has wondered aloud whether inline links, the very stuff of blog-writing, are a good idea or not. Because they convince people to stop reading here and go reading there.

One post recommended a technological solution: get the reader to decide how the links should appear. (And if you go there, you should at some point find a lot of what appears below in a comment). But this is applying a technological solution to a cognitive problem.

People need to take a step back and consider why inline linking gets used. I have to write using a number of different styles which use either inline links or links at the end. The two styles of writing turn out to be quite different – and I’ve argued against house styles using one or the other in different contexts because of this issue.

Inline linking became popular largely due to blogging and is useful because it allows you to construct a post quickly – all I have to do is put in the link and assume if the reader is not up to speed on the subject they will click to find out. Those that are aware of what’s at the end of the link don’t have to read through yet another description of what the link’s endpoint says, which is what happens if you bung the links at the end (and then provide some more description to remind people what the links are all about).

So, I’d argue for someone who is aware of a thread of stories, the inline link format is less distracting because the knowledgeable reader does not have to wade through stuff they already know.

However, if you want to do long-form writing, and feel that there is an audience for it, then presenting the text link-free, maybe with a Javascript-assisted hiding scheme, is arguably the better bet. In that case, an inline link plus the description is arguably a form of tautology.
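To make the link-free idea concrete, here is a rough sketch: strip the inline links out of a fragment of HTML, leaving numbered markers in the text and the URLs for the end of the piece. The function name and the naive regex are mine; a real hiding scheme would walk the DOM rather than pattern-match markup:

```javascript
// Replace each inline <a> with its label plus a numbered marker,
// collecting the URLs so they can be listed after the text -
// effectively turning inline links into footnotes.
function deferLinks(html) {
  const urls = [];
  const text = html.replace(
    /<a\s+href="([^"]*)"[^>]*>(.*?)<\/a>/g,
    (_, href, label) => {
      urls.push(href);
      return `${label} [${urls.length}]`;
    }
  );
  return { text, urls };
}
```

Which rather proves the point made above about Winer: strip the inline links and you have reinvented the footnote.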

It’s also worth bearing in mind that authors play with distraction all the time in the interest of maintaining interest in a story by scene shifting. With inline links, you’re just inviting the reader to do their own scene shifting if they feel like it.

Thinking about the Javascript angle, perhaps what would be handy would be a flag button or “open in underlying tab” so you’ve got the links for the sections that most piqued your interest when you’ve finished reading. Think of it as the Getting Things Done approach to dealing with inline links - stop thinking about that link now, make a note, deal with it later with your full attention. (There’s probably some Firefox addon for this isn’t there?)
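The queue behind that flag button is only a few lines. A sketch - the names are mine, and wiring it to, say, an alt-click handler in the browser is left as the obvious next step:

```javascript
// A Getting Things Done queue for links: note a link now, deal with
// it later with your full attention. Duplicates are ignored; drain()
// hands back the list and empties the queue.
function makeLaterQueue() {
  const urls = [];
  return {
    add(url) {
      if (!urls.includes(url)) urls.push(url);
    },
    drain() {
      return urls.splice(0, urls.length);
    },
  };
}
```

When you finish reading, drain the queue and open each link in a fresh tab with your full attention.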

Media consumption

1 June 2010

Once I got past wondering whether there's a maximum height restriction for the San Francisco MUNI (just look at the legroom on those seats), my next thought on looking at this picture was how only one of them seemed to be interacting with their media device in any way beyond just looking at it.