Technology: July 2008 Archives

If there is one thing that troubles processor architects right now, it's working out how many cores they should stick on a die. The number of transistors they can plant on a chip doubles every two years and there's no sign of that supply running dry in the next five years.

What's the problem? Just take the processor core you have already and then step and repeat it across the die. It's worked for graphics processors.

Unfortunately, only some software parallelises well enough to spread across many cores. Often, the overhead of distributing the work outweighs the advantage of running the code in parallel. This, in effect, is the observation behind Gene Amdahl's eponymous law of computer performance.

In its most basic form, Amdahl's Law says it's only worth speeding up things that you do a lot. Big, nested loops are good targets. Lots of branchy code with no big loops? Not worth the effort. With parallel processors, if you can spread the work of those loops over many of them, you see a speed-up. But there is a limit, governed by the fraction of the code that still has to run on just one processor.
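To put a rough number on that limit - a back-of-the-envelope sketch of the textbook formula, not anything quoted from the paper discussed below - if a fraction p of the work parallelises across n identical cores, the best overall speed-up is 1/((1 - p) + p/n):

    # Sketch of classic Amdahl's Law: p is the fraction of the work that
    # parallelises, n the number of identical cores.
    def amdahl_speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 95% of the work parallelised, the serial 5% caps the gain:
    for n in (2, 4, 16, 256):
        print(n, round(amdahl_speedup(0.95, n), 1))
    # roughly 1.9, 3.5, 9.1 and 18.6 - the ceiling is 20x no matter how
    # many cores you throw at it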

In a paper published in this month's IEEE Computer, a pair of researchers from the University of Wisconsin-Madison - one of whom has now moved to Google - has attempted to extend Amdahl's Law to the world of multicore processors, where you do not necessarily make all the cores the same size.
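The twist is to ask what happens when the transistor budget is spent unevenly. As a hedged sketch of the kind of model involved - the formula below and the assumption that a core built from r small-core units runs about sqrt(r) times faster are my illustration, not figures from the paper - fuse r of the n units into one big core for the serial code and leave the rest as small cores:

    import math

    # Illustrative asymmetric-multicore variant of Amdahl's Law.
    # n is the chip budget in small-core units; r of them are fused into
    # one bigger core assumed to run sqrt(r) times faster than a small core.
    def asymmetric_speedup(f: float, n: int, r: int) -> float:
        perf_big = math.sqrt(r)               # assumed big-core performance
        serial = (1.0 - f) / perf_big         # serial code on the big core
        parallel = f / (perf_big + (n - r))   # parallel code uses every core
        return 1.0 / (serial + parallel)

    # With a budget of 256 units and 95% parallel code, a medium-sized big
    # core beats both extremes (all small cores, or one giant core):
    for r in (1, 16, 64, 256):
        print(r, round(asymmetric_speedup(0.95, 256, r), 1))
    # roughly 18.6, 61.0, 90.9 and 16.0

Under those assumptions the sweet spot is somewhere in between, which is the broad argument for not making all the cores the same size.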

ARM hinted at this deal in an analyst meeting last week, but the company this morning confirmed that it has sold an architectural licence to "a leading handset OEM...to develop a roadmap of mobile computing devices". The company is not saying who the customer is, but a lot of the signs point to Apple.

Nokia decided to outsource much of its silicon design, and the company has traditionally bought off-the-shelf ARM cores anyway, which Texas Instruments then puts into system-on-chip (SoC) parts. Motorola already has an architectural licence. Samsung, the world's second largest chipmaker, would be a possibility, but it signed a big deal with ARM earlier this year to get early access to ARM's own designs.

Following Apple's purchase of chip design firm PA Semiconductor, ARM people have been particularly jumpy of late whenever Apple gets mentioned. And questions asking whether Apple already has an architectural licence (the computer maker was the driving force behind the creation of ARM and one of the original investors) were met with a "you'll have to ask Apple", rather than a "yes", a "no" or a "no comment".

PA's designers have a lot of experience with ARM, although their most recent offering, which is getting dropped like a stone, was based on the PowerPC. ARM's investor meeting is about to start. But, realistically, if the company was going to say that Apple is the new architectural licensee, it would have done so already.

ARM CEO Warren East just warned that it will take time for the company to see royalties from any products sold that use the processors licensed under the new arrangement (9:20): "It is an architectural licence with a leading OEM for both current and future technology. Don't get too excited on any revenue on this: it will take some time. The revenue [from this deal] will be recognised over several years."

ARM's emphasis on this deal is that it is all about the future. Tim Score, CFO, said (9:38): "When ARM signs architectural licences, they are typically for an architecture that is already in play. So you tend to get a big revenue bump. This one is also for future architectures, so the revenue has to be spread over a number of years."

Policy of the day

29 July 2008

Valleywag has picked up on Google-wannabe Cuil's policy of not collecting personal data on the surfers who use its search engine and asks:

Why isn't this their marketing slogan?

I don't know. Because the policy won't last?

Techcrunch's Michael Arrington wants a web tablet and, claiming that no such machine exists, believes it will only happen if the design is crowdsourced. Oh really? I've seen loads of them. It's just that they tend to be prototypes in places like the Philips HomeLab.

If you look at the Philips Research site and poke around a little, you will find pictures of a device not a million miles from the Techcrunch mock-up being used as an oversized remote control. You can see an example below. Philips Electronics has a heavily stripped-down screen-based remote that you can buy in the shops as a kind of souped-up OneForAll.

[Image: Philips HomeLab tablet prototype]

The problem is not making a web tablet. I don't think it's even a case of getting the price down. It's working out whether you have a big enough market for the device to ship in high enough volumes to justify the wafer-thin margins needed to deliver a $200 price on a product that has something like a 10in colour screen, processor, WiFi and a few gigs of storage.

Last week, Tom Watson, UK minister for transformational government – a title that makes you wonder if there will soon be a minister for leveraged e-government solutions – claimed Whitehall computers would be carbon neutral within four years. Apparently it would be achieved by switching them off more often. This must be some use of the term 'carbon neutral' I haven't previously encountered.

Unless the plan is to run all of Whitehall's machines off solar panels, nuclear or wind energy alone, it's hard to see it being achievable without some serious massaging of the numbers. It's no bad thing that Watson wants to cut the energy usage of government computers, but does touting the target as carbon neutral do anyone any favours? Because all the Cabinet Office has to do in 2012 is buy enough offsets to make it happen, no matter what the actual outcome is. All that happens in that case is that the public ends up forking out for a plan it did not really want for the sake of a slogan.

If you read the paper by Masahiko Inouye and colleagues at the University of Toyama on their production of the first lengthy chains of double-stranded artificial DNA, you wonder how analyst Ruchi Mallya managed to come up with the idea that this stuff might be the future of green IT.

Mallya postulated "a biochip that will make standard computers faster and more energy efficient".

If you read the press release from the American Chemical Society, the publisher of Inouye et al's paper, you begin to see where that idea came from. However, there is a subtle difference in meaning:

"The finding could lead to improvements in gene therapy, futuristic nano-sized computers, and other high-tech advances, [the researchers] say."

The claim on the release is slightly more believable - we're not talking about trying to reinvent conventional computing here. But even that is a stretch from what the researchers themselves claim in the actual paper:

"The artificial DNA might be applied to a future extracellular genetic system with information storage and amplifiable abilities...This type of research is primarily motivated by pure scientific exploration and eventually directed toward biomedical applications."

Datamonitor analyst Ruchi Mallya has taken a quick look at the production of the world's first strands of DNA of reasonable length that use artificial molecular groups in place of the guanine, adenine, thymine and cytosine groups found in the natural stuff. The piece asks: is artificial DNA the future of computers? Jack Schofield at the Guardian naturally asks whether it is.

I have a short and simple answer. No. Not even close.

You can use DNA for computation but you wouldn't use it to replace any existing form of computer. It's just too darn slow. And there does not seem to be a realistic way of making logic circuits using DNA that even approach the complexity of today's silicon-based machines, let alone computers in 20 years' time.

The group that has arguably done the most work on DNA computing is at Caltech. I've seen Georg Seelig talk a couple of times on the topic and he is realistic about the potential uses for the technology.

"What is realistic is a few thousand components. We won't get to having millions of components in the same test tube," said Seelig at a recent meeting at the Royal Society.

Ruchi Mallya, an analyst at Datamonitor, has pored over a paper on the creation of the first long chains of DNA made using artificial bases in place of good old guanine, adenine, thymine and cytosine, and come to the conclusion that it might be the future of computing.

The answer is no, it isn't. While writing the long answer, I felt I just had to point to this paragraph:

"In addition, unlike today's personal computers, DNA computers require minimal or no external power sources, as they run on internal energy produced during cellular reactions."

Yes, processes involving DNA don't take a lot of energy. But cells don't produce energy; they only convert energy they have managed to store. If they did produce energy, who would need oil? We could run the world on adenosine triphosphate. It's good stuff - you get through kilos of it every day - but it just happens to be a good way of delivering energy, not creating it.

Cisco decided to hold an open day at its recently refurbished demonstration centre in Bedfont Lakes, one of those anonymous business parks almost unknown to public transport lying halfway between Heathrow Terminal 4 and the Feltham Young Offenders' Centre. No, really, it's lovely.

Apparently, we were supposed to be able to see and play with demos of a "self-learning artificial intelligence", "a high-street shop of the future", "technology being deployed to support disaster relief" and "the future of healthcare", and to "experience a Cisco TelePresence meeting".

[Image: Cisco TelePresence meeting]

"We hope to demonstrate some cutting edge and future concept applications of Cisco's technology - which use the power of the Internet to deliver some very powerful applications," gushed the invitation.

The reality? Let's step back, back in time to about, ooh, 1998. The self-learning AI turned out to be a flashback to the agent technology of the late 1990s, tricked out with a slightly more realistic avatar that did weird things like lean into the screen until you could only see its eyes. I am still mystified as to where the self-learning came in, as the natural language processing seemed to come entirely from off-the-shelf Microsoft software and the agent was apparently programmed to obey 'business rules'.

OK, I'm going to lay off the "big bucket of bits is all you need" theory of science, computing and the future in a minute.

But not before this example of where simply using the relative frequency of words to perform spelling correction breaks down.
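By way of illustration, here is a minimal, hypothetical sketch of that failure mode - the word counts are invented and the one-edit candidate generator follows the familiar dictionary-lookup recipe - in which the corrector always backs the commoner word, however implausible it is in context:

    # Hypothetical sketch of frequency-only spelling correction: pick the
    # known word within one edit that occurs most often in some corpus.
    # The counts below are invented for illustration.
    WORD_COUNTS = {"there": 500_000, "three": 120_000}

    def edits1(word):
        """All strings one delete, transpose, replace or insert away."""
        letters = "abcdefghijklmnopqrstuvwxyz"
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = {L + R[1:] for L, R in splits if R}
        transposes = {L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1}
        replaces = {L + c + R[1:] for L, R in splits if R for c in letters}
        inserts = {L + c + R for L, R in splits for c in letters}
        return deletes | transposes | replaces | inserts

    def correct(word):
        candidates = {w for w in edits1(word) | {word} if w in WORD_COUNTS}
        return max(candidates, key=WORD_COUNTS.get) if candidates else word

    # Raw frequency has no notion of context, so a typo that sits one edit
    # from two real words always goes to the commoner one:
    print(correct("thre"))   # -> "there", even in "I bought thre apples"

A real corrector gets around this by looking at the surrounding words, not just at how often each candidate appears on its own.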