July 2008 Archives

If there is one thing that troubles processor architects right now, it's working out how many cores they should stick on a die. The number of transistors they can plant on a chip doubles every two years and there's no sign of that supply running dry in the next five years.

What's the problem? Just take the processor core you have already and then step and repeat it across the die. It's worked for graphics processors.

Unfortunately, only some software parallelises well enough to spread across many cores. Often, the overhead of distributing the work outweighs the advantage you get from running the code in parallel. This, in effect, is the observation that Gene Amdahl enshrined in his eponymous law of computer performance.

In its most basic form, Amdahl's Law says it's only worth speeding up things that you do a lot. Big, nested loops are good targets. Lots of branching straight-line code? Not worth the effort. With parallel processors, if you can spread the work of loops over many of them, you see a speed-up. But there is a limit governed by how much code you need to run on just one processor.
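
For the record, the classic form of the law: if a fraction f of the work parallelises perfectly across n processors, the overall speedup is 1/((1 - f) + f/n). A quick sketch of where that limit bites (the function name and figures here are mine, for illustration):

```python
def amdahl_speedup(f, n):
    """Classic Amdahl's Law: f is the fraction of the work that
    parallelises, n is the number of processors it is spread over."""
    return 1.0 / ((1.0 - f) + f / n)

# Even with 95% of the work parallelised, 256 cores deliver
# less than a 19x speedup: the serial 5% dominates.
print(amdahl_speedup(0.95, 256))    # ~18.6
print(amdahl_speedup(0.95, 10**6))  # ~20 - the hard ceiling of 1/(1-f)
```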

In a paper published in this month's IEEE Computer, a pair of researchers from the University of Wisconsin-Madison - one of whom has since moved to Google - has attempted to extend Amdahl's Law to the world of multicore processors, where you do not necessarily make all the processors the same size.
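
Their key move is to make core size a variable in the model. The sketch below is my reading of the kind of sum the paper works through, under the authors' simplifying assumption that a core built from r units of chip resource delivers roughly sqrt(r) times the performance of a single-unit core; treat it as an illustration, not their code:

```python
import math

def perf(r):
    # Simplifying assumption: r resource units buy roughly sqrt(r)
    # times the single-thread performance of one unit.
    return math.sqrt(r)

def asymmetric_speedup(f, n, r):
    """One big core built from r of the chip's n resource units, plus
    (n - r) single-unit cores. Serial code runs on the big core alone;
    parallel code runs on the big core and the small ones together."""
    serial_time = (1.0 - f) / perf(r)
    parallel_time = f / (perf(r) + (n - r))
    return 1.0 / (serial_time + parallel_time)

# 256-unit budget, 95% parallel code: one 16-unit core plus 240
# small ones comfortably beats 256 identical small cores.
print(asymmetric_speedup(0.95, 256, 16))  # ~61
print(asymmetric_speedup(0.95, 256, 1))   # ~18.6 - the all-small chip
```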

Yesterday, yet another in the wave of "press release distribution companies" emailed me a release for femtocell maker ip.access. So far, so good. "Femtocells," thinks I. "I'm writing something on femtocells, better have a look."

It's nothing more than saying the company has a white paper on how funky femtocells will be. But, there is a chance there could be something useful in it and I was just beginning to line up interviews. So, I went to the link to download the file...and found out I needed to fill in a form to get it. I'm not over-fussed about filling in a form but, as this is likely to go into some form of CRM system, I figured it was just as easy to save a salesperson a call and get the paper from the PR contact - who, of course, would be named at the bottom of the release - and set up the interview at the same time.

I hit Reply and start banging out the email, only to notice an odd bit of text in the introduction: "...please do not hesitate to contact them via the details below". Then I realise that Neondrum, which sent the release, is only distributing it, with all the usual disclaimers: "[We] cannot accept any liability whatsoever for the inaccuracy or otherwise of any information contained in this news release" etc. All they're going to do is tell me to contact the client. OK, fine.

But there are no other contact details.

I asked Neondrum to send them over. Ten hours later (in the meantime, I'd found that CompanyCare had been looking after ip.access and contacted them directly) I got a reply:

"Sorry, the introductory message was badly worded - ip.access haven't provided a specific contact for this media advisory, if you want to find out more you need to download the paper."

Thanks. That's so helpful. And from a company that publishes a booklet that it claims contains ten top tips for online PR. I wonder if "always provide a contact number" is in there. I'd find out, but you have to register as a client at Neondrum to stand a chance of getting it. If someone has a copy, send it over, I could do with a laugh.

The hype cycle works quickly these days. At about 9pm US Pacific time, Techcrunch published its first story on the search engine startup Cuil. It was far from being the only site with the story around that time: the company had told a bunch of bloggers and journalists about its plans the week before with the aim of seeing it all come out in a big splurge on Monday, 28 July, 12am US Eastern.

A few hours later, the Cuil site died. Oops. But, no mind, just the effect of thousands of people hitting the site to see how it performed versus Google. "Flatlining right after your launch is more of a rite of passage than an embarrassment."

A day later and the euphoria had gone. "The story quickly turned from Google killer to Google's lunch."

Getting a backlash so quickly? "This was entirely the company's own fault. It pre-briefed every blogger and tech journalist on the planet, but didn't allow anyone to actually test the search engine before the launch," complained Erick Schonfeld.

And you're surprised? Who says the old promotional tricks don't work?

ARM hinted at this deal in an analyst meeting last week, but this morning the company confirmed that it has sold an architectural licence to "a leading handset OEM...to develop a roadmap of mobile computing devices". The company is not saying who the customer is, but a lot of the signs point to Apple.

Nokia has decided to outsource much of its silicon design and has traditionally bought off-the-shelf ARM cores anyway, which Texas Instruments then puts into system-on-chip (SoC) parts. Motorola already has an architectural licence. Samsung, the world's second-largest chipmaker, would be a possibility, but it signed a big deal with ARM earlier this year to get early access to ARM's own designs.

Following Apple's purchase of chip design firm PA Semi, ARM people have been particularly jumpy of late whenever Apple gets mentioned. And questions asking whether Apple already has an architectural licence (the computer maker was the driving force behind the creation of ARM and one of the original investors) were met with a "you'll have to ask Apple", rather than a "yes", a "no" or a "no comment".

PA's designers have a lot of experience with ARM, although their most recent offering, which is getting dropped like a stone, was based on the PowerPC. ARM's investor meeting is about to start. But, realistically, if the company was going to say that Apple is the new architectural licensee, it would have done so already.

ARM CEO Warren East has just warned that it will take time for the company to see royalties from any products sold that use the processors licensed under the new arrangement (9:20): "It is an architectural licence with a leading OEM for both current and future technology. Don't get too excited on any revenue on this: it will take some time. The revenue [from this deal] will be recognised over several years."

ARM's emphasis on this deal is that it is all about the future. Tim Score, CFO, said (9:38): "When ARM signs architectural licences, they are typically for an architecture that is already in play. So you tend to get a big revenue bump. This one is also for future architectures, so the revenue has to be spread over a number of years."

Although Seth Finkelstein has debunked the idea that Knol is being promoted too heavily in Google's search listings, a lot of people reckon that the number-one search engine is rapidly losing its way - and that Knol is a big mistake resulting from policies that favour Google's ad business over its search service.

Knol is a magnet for the get-rich-quick brigade who reckon they can siphon off a load of money through ads for dodgy health supplements. It might work as a competitor to Squidoo, Mahalo and even Wikipedia. But, a lot will depend on the image that Knol attracts in the short term. It's got a good chance of becoming the .info of information and quick reference sites, where the only people who show up are spammers with slightly more original content.

But does that matter to Google? Regarding Google's business as search is a mistake. It's an advertising business. And one of the unfortunate drivers of the online classified ad business that the company now effectively dominates is that a bunch of people are only too happy to click on ads for the 'health supplements', teeth whiteners and other kinds of products being actively promoted on Knol pages. They may well be the most active ad-clickers around.

There's a good chance that Google will make more cash out of the dodgier Knol pages than the ones designed to look more like entries in an encyclopedia.

People are misreading Google's slogan, "Don't be evil". It's not a slogan. It's an admonishment to those sucking on the Adsense teat: "Don't be evil...or we'll kick you off the search results pages. You can be a bit naughty, mind."

While things are good for Google, nobody will really care:

"You show them you have in you something that is really profitable, and then there will be no limits to the recognition of your ability. Of course you must take care of the motives - right motives - always." - Mr Kurtz, Heart of Darkness

Policy of the day

29 July 2008

Valleywag has picked up on Google-wannabe Cuil's policy of not collecting personal data on the surfers who use its search engine and asks:

Why isn't this their marketing slogan?

I don't know. Because the policy won't last?

In the case of Giles Coren's purple-tinged prose, the Times subs were right anyway. He complains that by interfering with the onanistic euphuism of his final paragraph, the subs ruined the money shot. Removing an indefinite article led to a premature conclusion. There was no firm climax for Coren, but the whimper of an unstressed syllable.

In the letter, Coren lets the Times, and now us, know that Soho is associated with sex. So the whole thing about "wondering where to go for a nosh" was very important. Should they ever resurrect Round the Horne, I'll be sure to point them in your direction.

The first commenter at Guido Fawkes (and no doubt commenters at other places, I didn't look that hard) pointed out that if dear sensitive Giles wants his glorious copy to never feel the cold caress of a sub-editor, he should give up writing for the Thunderer and just post reviews to a blog. He can be secure in the knowledge that no-one will ever cause him to be seen finishing a review with the indignity of an unstressed syllable.

I can just imagine the subs now doing all they can to ensure that Coren's promise is never broken. He will always go out with a bang. In the meantime, maybe the Guardian's Media Monkey can expect a spanking from the pishkeh of epistles.

Techcrunch's Michael Arrington wants a web tablet and, not only that, he believes it will only happen if the design is crowdsourced, claiming that the machine doesn't exist. Oh really? I've seen loads of them. It's just that they tend to be prototypes in places like the Philips HomeLab.

If you look at the Philips Research site and poke around a little, you will find pictures of a device not a million miles from the Techcrunch mock-up being used as an oversized remote control. You can see an example below. Philips Electronics has a heavily stripped-down screen-based remote that you can buy in the shops as a kind of souped-up OneForAll.

[Image: a tablet prototype from the Philips HomeLab]

The problem is not making a web tablet. I don't think it's even a case of getting the price down. It's working out whether you have a big enough market for the device to ship in high enough volumes to justify the wafer-thin margins needed to deliver a $200 price on a product that has something like a 10in colour screen, processor, WiFi and a few gigs of storage.

Last week, Tom Watson, UK minister for transformational government – a title that makes you wonder if there will soon be a minister for leveraged e-government solutions – claimed Whitehall computers would be carbon neutral within four years. Apparently this would be achieved by switching them off more often. This must be some use of the term 'carbon neutral' I haven't previously encountered.

Unless the plan is to run all of Whitehall's machines off solar, nuclear or wind energy alone, it's hard to see this plan being achievable without some serious massaging of the numbers. It's no bad thing that Watson wants to cut the energy usage of government computers, but does touting the target as carbon neutral do anyone any favours? Because all the Cabinet Office has to do in 2012 is buy enough offsets to make it happen, no matter what the actual outcome is. All that happens in that case is that the public ends up forking out, for the sake of a slogan, for a plan that it did not really want.

If you read the paper by Masahiko Inouye and colleagues at the University of Toyama on their production of the first lengthy chains of double-stranded artificial DNA, you wonder how analyst Ruchi Mallya managed to come up with the idea that this stuff might be the future of green IT.

Mallya postulated "a biochip that will make standard computers faster and more energy efficient".

If you read the press release from the American Chemical Society, the publisher of Inouye et al's paper, you begin to see where that idea came from. However, there is a subtle difference in meaning:

"The finding could lead to improvements in gene therapy, futuristic nano-sized computers, and other high-tech advances, [the researchers] say."

The claim on the release is slightly more believable - we're not talking about trying to reinvent conventional computing here. But even that is a stretch from what the researchers themselves claim in the actual paper:

"The artificial DNA might be applied to a future extracellular genetic system with information storage and amplifiable abilities...This type of research is primarily motivated by pure scientific exploration and eventually directed toward biomedical applications."

Datamonitor analyst Ruchi Mallya has taken a quick look at the production of the world's first strands of DNA of reasonable length that use artificial molecular groups in place of the guanine, adenine, thymine and cytosine groups found in the natural stuff. The piece asks: is artificial DNA the future of computers? Jack Schofield at the Guardian asks, naturally enough, whether that is going to be the case.

I have a short and simple answer. No. Not even close.

You can use DNA for computation but you wouldn't use it to replace any existing form of computer. It's just too darn slow. And there does not seem to be a realistic way of making logic circuits using DNA that even approach the complexity of today's silicon-based machines, let alone computers in 20 years' time.

The group that has arguably done the most work on DNA computing is at Caltech. I've seen Georg Seelig talk a couple of times on the topic and he is realistic about the potential uses for the technology.

"What is realistic is a few thousand components. We won't get to having millions of components in the same test tube," said Seelig at a recent meeting at the Royal Society.

When memes attack

18 July 2008

If you've seen a blog in the last week or so, you've probably noticed people going through a list of 100 books supposedly put together by the US National Endowment for the Arts' Big Read programme. I first came across it at the Diary of a Wordsmith, who has spotted two versions. One is the "US version" and one is the "UK version". But, there is no US version.

The blog meme has become the social-networking equivalent of the chain letter and often contains about as much truth. The claim behind this top-100 list is that the NEA put it out to publicise a reading programme and that the average American has read only six of the hundred titles the "NEA has printed".

Ruchi Mallya, an analyst at Datamonitor, has pored over a paper on the creation of the first long chains of DNA made using artificial bases in place of good old guanine, adenine, thymine and cytosine - and come to the conclusion that it might be the future of computing.

The answer is no, it isn't. While writing the long answer, I felt I just had to point to this paragraph:

"In addition, unlike today's personal computers, DNA computers require minimal or no external power sources, as they run on internal energy produced during cellular reactions."

Yes, processes involving DNA don't use a lot of energy. But cells don't produce energy; they only convert energy they have managed to store. If they did produce energy, who would need oil? We could run the world on adenosine triphosphate. It's good stuff: you get through kilos of it every day. But it just happens to be a good way of delivering energy, not of creating it.

Cisco decided to hold an open day at its recently refurbished demonstration centre in Bedfont Lakes, one of those anonymous business parks almost unknown to public transport lying halfway between Heathrow Terminal 4 and the Feltham Young Offenders' Centre. No, really, it's lovely.

Apparently, we were supposed to be able to see and play with demos of a "self-learning artificial intelligence", "a high-street shop of the future", "technology being deployed to support disaster relief" and "the future of healthcare", and to "experience a Cisco TelePresence meeting".

[Image: a Cisco TelePresence meeting]

"We hope to demonstrate some cutting edge and future concept applications of Cisco's technology - which use the power of the Internet to deliver some very powerful applications," gushed the invitation.

The reality? Let's step back, back in time to about, ooh, 1998. The self-learning AI turned out to be a flashback to the agent technology of the late 1990s, tricked out with a slightly more realistic avatar that did weird things like lean into the screen until you could only see its eyes. I am still mystified as to where the self-learning came in, as the natural-language processing seemed to come entirely from off-the-shelf Microsoft software and the agent was apparently programmed to obey 'business rules'.

OK, I'm going to lay off the "big bucket of bits is all you need" theory of science, computing and the future in a minute.

But not before this example of where simply using the relative frequency of words to perform spelling correction breaks down.
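
For anyone who missed it, the approach in question works roughly like this: generate every string a single edit away from the misspelling, keep the ones that are real words, and pick whichever appears most often in a large body of text. A bare-bones sketch (the toy corpus and names are mine; a real corrector would count frequencies over millions of words):

```python
import re
from collections import Counter

# Toy corpus standing in for the big body of text a real
# corrector would count word frequencies from.
CORPUS = "the quick brown fox jumps over the lazy dog and the quick dog"
COUNTS = Counter(re.findall(r"[a-z]+", CORPUS.lower()))

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """Every string one delete, transpose, replace or insert away."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in LETTERS]
    inserts = [l + c + r for l, r in splits for c in LETTERS]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Prefer a known word; otherwise take the known candidate one
    edit away with the highest corpus frequency."""
    candidates = ({word} & COUNTS.keys()) or \
                 (edits1(word) & COUNTS.keys()) or {word}
    return max(candidates, key=lambda w: COUNTS[w])

print(correct("teh"))  # 'the': raw frequency gets the easy case right...
# ...but frequency alone knows nothing about context, which is
# exactly where this approach breaks down.
```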